Logistic regression with a quadratic (L2) penalty on the coefficients. Calls stepPlr::plr() from package stepPlr.

Dictionary

This Learner can be instantiated via lrn():

lrn("classif.stepPlr")

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob”

  • Feature Types: “logical”, “integer”, “numeric”

  • Required Packages: mlr3, stepPlr

Parameters

Id                   Type       Default  Levels    Range
cp                   character  aic      aic, bic  -
lambda               numeric    1e-04    -         \([0, \infty)\)
offset.coefficients  untyped    -        -         -
offset.subset        untyped    -        -         -
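
Hyperparameters can be passed directly to lrn() at construction time. A minimal sketch; the values below are illustrative choices, not recommended defaults:

# Construct the learner with BIC-based model scoring and a larger L2 penalty
learner = lrn("classif.stepPlr", cp = "bic", lambda = 0.01)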

References

Park, Mee Young, Hastie, Trevor (2007). “Penalized logistic regression for detecting gene interactions.” Biostatistics, 9(1), 30-50. ISSN 1465-4644, doi:10.1093/biostatistics/kxm010.

Author

annanzrv

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifStepPlr

Methods

Method new()

Creates a new instance of this R6 class.

Usage

LearnerClassifStepPlr$new()
Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifStepPlr$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.
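
For example, a deep clone yields a fully independent copy whose parameter values can be changed without affecting the original (a usage sketch, assuming a learner constructed as above):

# Deep-clone the learner so modifications do not propagate back
learner2 = learner$clone(deep = TRUE)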

Examples

# Define the Learner
learner = lrn("classif.stepPlr")
print(learner)
#> 
#> ── <LearnerClassifStepPlr> (classif.stepPlr): Logistic Regression with a L2 Pena
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3 and stepPlr
#> • Predict Types: [response] and prob
#> • Feature Types: logical, integer, and numeric
#> • Encapsulation: none (fallback: -)
#> • Properties: twoclass and weights
#> • Other settings: use_weights = 'use', predict_raw = 'FALSE'

# Define a Task
task = tsk("sonar")

# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> 
#> Call:
#> stepPlr::plr(x = data, y = y)
#> 
#> Coefficients:
#> Intercept        V1       V10       V11       V12       V13       V14       V15 
#>  24.83185 -28.39825  21.12407 -35.88934 -10.32200  14.78449  -5.64233  18.84495 
#>       V16       V17       V18       V19        V2       V20       V21       V22 
#>  -8.29678   7.02900 -12.25860  14.28044 -23.48939 -38.74998  45.32683 -43.67588 
#>       V23       V24       V25       V26       V27       V28       V29        V3 
#>  44.19479 -50.64301  33.55265 -29.44830  26.04878 -21.49040   5.36826  38.00450 
#>       V30       V31       V32       V33       V34       V35       V36       V37 
#> -24.81642  44.44179 -22.52509  -2.81129  13.35192 -18.91877  27.98612   0.62712 
#>       V38       V39        V4       V40       V41       V42       V43       V44 
#>   1.50183 -11.90783   2.34355  12.87097  -4.38842  -3.94327 -28.64838   9.97509 
#>       V45       V46       V47       V48       V49        V5       V50       V51 
#>  -7.15095   6.03322  10.85503 -38.43902 -73.53414   8.37760  55.92232 -92.51202 
#>       V52       V53       V54       V55       V56       V57       V58       V59 
#> -49.95815 -26.66006  -8.14432   0.00282  21.57738  39.18482 -32.04272  -8.62328 
#>        V6       V60        V7        V8        V9 
#> -20.29468 -22.48282  23.10503  18.34899 -34.75738 
#> 
#>     Null deviance: 192.63 on 138 degrees of freedom
#> Residual deviance: 23.67 on 92.12 degrees of freedom
#>             Score: deviance + 4.9 * df = 254.98 


# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> classif.ce 
#>  0.2753623
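
Because the learner supports the “prob” predict type (see Meta Information), probability predictions can be requested and scored with probability-based measures. A minimal sketch using the standard mlr3 AUC measure:

# Switch to probability predictions and evaluate with AUC
learner$predict_type = "prob"
learner$train(task, row_ids = ids$train)
predictions = learner$predict(task, row_ids = ids$test)
predictions$score(msr("classif.auc"))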