Logistic regression with a quadratic (L2) penalty on the coefficients. Calls stepPlr::plr() from package stepPlr.

Dictionary

This Learner can be instantiated via lrn():

lrn("classif.stepPlr")

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob”

  • Feature Types: “logical”, “integer”, “numeric”

  • Required Packages: mlr3, stepPlr

Parameters

Id                   Type       Default  Levels    Range
cp                   character  aic      aic, bic  -
lambda               numeric    1e-04    -         \([0, \infty)\)
offset.coefficients  untyped    -        -         -
offset.subset        untyped    -        -         -
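
Hyperparameters can be passed to lrn() at construction or changed afterwards via the learner's param_set. A minimal sketch using the lambda and cp parameters from the table above (the chosen values are illustrative, not recommendations):

```r
library(mlr3)

# Construct the learner with a larger L2 penalty and
# BIC-based model scoring instead of the AIC default
learner = lrn("classif.stepPlr", lambda = 0.01, cp = "bic")

# Parameters can also be updated on an existing learner
learner$param_set$values$lambda = 0.001
```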

References

Park, Mee Young, Hastie, Trevor (2007). “Penalized logistic regression for detecting gene interactions.” Biostatistics, 9(1), 30-50. ISSN 1465-4644, doi:10.1093/biostatistics/kxm010.

Author

annanzrv

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifStepPlr

Methods

Inherited methods


Method new()

Creates a new instance of this R6 class.

Usage

LearnerClassifStepPlr$new()

Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifStepPlr$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

# Define the Learner
learner = lrn("classif.stepPlr")
print(learner)
#> 
#> ── <LearnerClassifStepPlr> (classif.stepPlr): Logistic Regression with a L2 Pena
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3 and stepPlr
#> • Predict Types: [response] and prob
#> • Feature Types: logical, integer, and numeric
#> • Encapsulation: none (fallback: -)
#> • Properties: twoclass and weights
#> • Other settings: use_weights = 'use'

# Define a Task
task = tsk("sonar")

# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> 
#> Call:
#> stepPlr::plr(x = data, y = y)
#> 
#> Coefficients:
#> Intercept        V1       V10       V11       V12       V13       V14       V15 
#>  20.00721 -51.70529  19.36628  -0.81828 -24.61997 -27.65774  18.92948  -6.56113 
#>       V16       V17       V18       V19        V2       V20       V21       V22 
#>   0.78734  39.28720 -14.07665 -22.84003 -52.32212  -0.64998  17.70676 -36.34594 
#>       V23       V24       V25       V26       V27       V28       V29        V3 
#>  41.85174 -48.34981  22.06474   8.72136  -2.55066  -3.23728   9.26740  87.51252 
#>       V30       V31       V32       V33       V34       V35       V36       V37 
#> -32.61190  49.24862 -25.51906  -3.76044  21.79393 -17.88758  16.58914  -3.60510 
#>       V38       V39        V4       V40       V41       V42       V43       V44 
#>  13.96634 -11.09300 -35.43298   6.91907  18.81394 -19.36331  13.81329 -32.22989 
#>       V45       V46       V47       V48       V49        V5       V50       V51 
#>  -5.78358  14.82041 -22.04215 -65.67949 -44.57112  -4.48500  -8.31061 -40.12679 
#>       V52       V53       V54       V55       V56       V57       V58       V59 
#> -38.02874 -24.03685 -10.63623  -3.45862  -9.65920 -22.09487 -17.96865  -4.60268 
#>        V6       V60        V7        V8        V9 
#>  -7.62325 -12.70277  15.63205  -5.24974 -30.70403 
#> 
#>     Null deviance: 192.63 on 138 degrees of freedom
#> Residual deviance: 9.7 on 96.58 degrees of freedom
#>             Score: deviance + 4.9 * df = 219.03 


# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> classif.ce 
#>  0.2753623
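
Since the learner also supports the "prob" predict type (see Meta Information), the same workflow can be scored with a probability-based measure. A hedged sketch, reusing the task and ids from the example above and assuming the standard mlr3 measure "classif.auc":

```r
# Request probability predictions instead of hard labels
learner = lrn("classif.stepPlr", predict_type = "prob")
learner$train(task, row_ids = ids$train)
predictions = learner$predict(task, row_ids = ids$test)

# AUC requires probability predictions
predictions$score(msr("classif.auc"))
```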