Classification Logistic Regression Learner
Source: R/learner_stepPlr_classif_plr.R
mlr_learners_classif.stepPlr.Rd

Logistic regression with a quadratic penalization on the coefficients.
Calls stepPlr::plr() from stepPlr.
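As a sketch of the objective (following Park & Hastie; the exact scaling used inside stepPlr::plr() may differ), the penalized fit minimizes the negative log-likelihood plus a quadratic (L2) penalty:

\[ -\sum_{i=1}^{n} \left[ y_i \log p_i + (1 - y_i) \log(1 - p_i) \right] + \frac{\lambda}{2} \lVert \beta \rVert_2^2, \qquad p_i = \frac{1}{1 + \exp(-x_i^\top \beta)} \]

where \(\lambda\) corresponds to the lambda hyperparameter below.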
Parameters
| Id | Type | Default | Levels | Range |
|---|---|---|---|---|
| cp | character | aic | aic, bic | - |
| lambda | numeric | 1e-04 | - | \([0, \infty)\) |
| offset.coefficients | untyped | - | - | - |
| offset.subset | untyped | - | - | - |
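A minimal sketch of setting these hyperparameters through the mlr3 interface (the bic choice for cp and the lambda values are illustrative, not recommendations):

```r
library(mlr3)
library(mlr3extralearners)

# Construct the learner with BIC-based complexity selection
# and a stronger quadratic penalty than the default 1e-04
learner = lrn("classif.stepPlr", cp = "bic", lambda = 0.01)

# Hyperparameters can also be changed after construction
learner$param_set$values$lambda = 1e-3
```

Valid values are checked against the table above, so e.g. a negative lambda or an unknown cp level is rejected at assignment time.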
References
Park, Young M, Hastie, Trevor (2007). "Penalized logistic regression for detecting gene interactions." Biostatistics, 9(1), 30-50. ISSN 1465-4644, doi:10.1093/biostatistics/kxm010.
See also
as.data.table(mlr_learners) for a table of available Learners in the running session (depending on the loaded packages).
Chapter in the mlr3book: https://mlr3book.mlr-org.com/chapters/chapter2/data_and_basic_modeling.html#sec-learners
mlr3learners for a selection of recommended learners.
mlr3cluster for unsupervised clustering learners.
mlr3pipelines to combine learners with pre- and postprocessing steps.
mlr3tuning for tuning of hyperparameters, mlr3tuningspaces for established default tuning spaces.
Super classes
mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifStepPlr
Methods
Inherited methods
mlr3::Learner$base_learner(), mlr3::Learner$configure(), mlr3::Learner$encapsulate(), mlr3::Learner$format(), mlr3::Learner$help(), mlr3::Learner$predict(), mlr3::Learner$predict_newdata(), mlr3::Learner$print(), mlr3::Learner$reset(), mlr3::Learner$selected_features(), mlr3::Learner$train(), mlr3::LearnerClassif$predict_newdata_fast()
Examples
# Define the Learner
learner = lrn("classif.stepPlr")
print(learner)
#>
#> ── <LearnerClassifStepPlr> (classif.stepPlr): Logistic Regression with a L2 Pena
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3 and stepPlr
#> • Predict Types: [response] and prob
#> • Feature Types: logical, integer, and numeric
#> • Encapsulation: none (fallback: -)
#> • Properties: twoclass and weights
#> • Other settings: use_weights = 'use', predict_raw = 'FALSE'
# Define a Task
task = tsk("sonar")
# Create train and test set
ids = partition(task)
# Train the learner on the training ids
learner$train(task, row_ids = ids$train)
print(learner$model)
#>
#> Call:
#> stepPlr::plr(x = data, y = y)
#>
#> Coefficients:
#> Intercept V1 V10 V11 V12 V13 V14 V15
#> 25.79568 -39.47300 13.61152 -14.93734 -21.71626 34.99793 -39.40064 24.20333
#> V16 V17 V18 V19 V2 V20 V21 V22
#> -7.57436 20.30948 -10.12348 -12.53550 -31.65902 6.23221 9.63750 -26.90005
#> V23 V24 V25 V26 V27 V28 V29 V3
#> 11.77947 -10.30387 -11.16221 25.99713 -10.48677 1.96620 1.67119 50.87251
#> V30 V31 V32 V33 V34 V35 V36 V37
#> -29.76147 39.71731 -27.32958 14.24514 -8.19726 -15.32349 40.64620 -31.81419
#> V38 V39 V4 V40 V41 V42 V43 V44
#> 20.61093 -37.56760 -57.32719 46.70984 -9.31629 -2.95566 -25.74449 42.45411
#> V45 V46 V47 V48 V49 V5 V50 V51
#> -34.80434 5.98694 -6.05737 -65.20234 -7.23119 -4.33211 52.02833 -36.66587
#> V52 V53 V54 V55 V56 V57 V58 V59
#> -84.76803 -47.31759 -9.76264 -26.20719 0.88875 -6.18351 -18.82569 -16.69885
#> V6 V60 V7 V8 V9
#> -41.43344 -0.56390 61.63136 1.38726 -37.08786
#>
#> Null deviance: 188.87 on 138 degrees of freedom
#> Residual deviance: 14.09 on 93.72 degrees of freedom
#> Score: deviance + 4.9 * df = 237.5
# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)
# Score the predictions
predictions$score()
#> classif.ce
#> 0.3188406