Logistic regression with a quadratic penalization on the coefficients. Calls stepPlr::plr() from package stepPlr.

Dictionary

This Learner can be instantiated via lrn():

lrn("classif.stepPlr")

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob”

  • Feature Types: “logical”, “integer”, “numeric”

  • Required Packages: mlr3, stepPlr
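
The learner dispatches to stepPlr at train time, so that package must be installed. A minimal guard (a sketch, assuming a plain CRAN installation) is:

# Install the stepPlr backend from CRAN if it is missing
if (!requireNamespace("stepPlr", quietly = TRUE)) {
  install.packages("stepPlr")
}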

Parameters

Id                   Type       Default  Levels    Range
cp                   character  aic      aic, bic  -
lambda               numeric    1e-04    -         [0, ∞)
offset.coefficients  untyped    -        -         -
offset.subset        untyped    -        -         -
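
Hyperparameters can be passed to lrn() at construction or set on an existing learner; the values below are purely illustrative:

# Select the model score via BIC and apply a stronger penalty
learner = lrn("classif.stepPlr", cp = "bic", lambda = 0.01)

# Equivalently, update the parameter set of an existing learner
learner$param_set$values$lambda = 0.01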

References

Park, Mee Young, Hastie, Trevor (2007). “Penalized logistic regression for detecting gene interactions.” Biostatistics, 9(1), 30–50. ISSN 1465-4644, doi:10.1093/biostatistics/kxm010.

Author

annanzrv

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifStepPlr
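
The inheritance chain can be inspected through the S3 class attribute of the R6 object:

# The class vector mirrors the chain above
class(lrn("classif.stepPlr"))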

Methods

Method new()

Creates a new instance of this R6 class.

Usage

LearnerClassifStepPlr$new()

Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifStepPlr$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.
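
Cloning is useful when a configured learner should be modified without touching the original; a short sketch:

# Deep-clone so later parameter changes do not affect `learner`
learner2 = learner$clone(deep = TRUE)
learner2$param_set$values$lambda = 1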

Examples

# Define the Learner
learner = lrn("classif.stepPlr")
print(learner)
#> 
#> ── <LearnerClassifStepPlr> (classif.stepPlr): Logistic Regression with a L2 Pena
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3 and stepPlr
#> • Predict Types: [response] and prob
#> • Feature Types: logical, integer, and numeric
#> • Encapsulation: none (fallback: -)
#> • Properties: twoclass and weights
#> • Other settings: use_weights = 'use', predict_raw = 'FALSE'

# Define a Task
task = tsk("sonar")

# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> 
#> Call:
#> stepPlr::plr(x = data, y = y)
#> 
#> Coefficients:
#> Intercept        V1       V10       V11       V12       V13       V14       V15 
#>  20.47927 -32.62956 -23.16993 -12.64403 -26.20383  19.82591 -14.48455  13.93571 
#>       V16       V17       V18       V19        V2       V20       V21       V22 
#> -14.50593  23.94875 -16.02441  30.08670  -5.64180 -29.41186  14.00001 -27.15045 
#>       V23       V24       V25       V26       V27       V28       V29        V3 
#>  19.86076 -48.74213  39.99740   5.19521 -30.81699  27.89744 -15.99912  22.81858 
#>       V30       V31       V32       V33       V34       V35       V36       V37 
#> -11.54512  50.88096 -42.44599   1.49429  30.81123 -44.37293  48.51944   9.48055 
#>       V38       V39        V4       V40       V41       V42       V43       V44 
#> -23.18606   3.99588 -21.51605  -4.92179  21.96575 -19.14691 -35.87289  29.61197 
#>       V45       V46       V47       V48       V49        V5       V50       V51 
#> -40.99668  -1.52141  29.92169 -93.55116 -22.05459  -9.29698  27.61401 -23.25706 
#>       V52       V53       V54       V55       V56       V57       V58       V59 
#> -23.83092 -14.72860   8.03415   8.37467  13.12608   9.36455   9.22638 -15.41200 
#>        V6       V60        V7        V8        V9 
#>   1.93091 -25.87198  17.88712  44.14099 -14.44333 
#> 
#>     Null deviance: 191.48 on 138 degrees of freedom
#> Residual deviance: 8.33 on 96.58 degrees of freedom
#>             Score: deviance + 4.9 * df = 217.63 


# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> classif.ce 
#>  0.2173913
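
Since the learner supports the "prob" predict type (see Meta Information), it can also return class probabilities and be evaluated with probability-based measures; the measure choice below is illustrative:

# Predict class probabilities instead of hard labels
learner$predict_type = "prob"
predictions = learner$predict(task, row_ids = ids$test)

# Score with log loss alongside the default classification error
predictions$score(msr("classif.logloss"))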