Penalized (L1 and L2) Logistic Regression. Calls penalized::penalized() from package penalized.

Details

The penalized and unpenalized arguments in the learner are implemented slightly differently than in penalized::penalized(). Here there is no penalized parameter; instead, every variable is assumed to be penalized unless it is listed in the unpenalized parameter.
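
For example, assuming that unpenalized accepts a character vector of feature names (the parameter is untyped, so this is a sketch rather than the definitive interface) and using a hypothetical feature "x1", every other feature of the task receives the penalty:

# all features except the hypothetical "x1" are penalized
learner = lrn("classif.penalized", lambda1 = 1, unpenalized = c("x1"))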

Initial parameter values

  • trace is set to FALSE to disable printing of output during model training.
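
To restore the upstream default and print the optimization progress during training, the value can be overridden at construction or on an existing learner, for example:

learner = lrn("classif.penalized", trace = TRUE)
# or, on an already constructed learner:
learner$param_set$set_values(trace = TRUE)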

Dictionary

This Learner can be instantiated via lrn():

lrn("classif.penalized")

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob”

  • Feature Types: “logical”, “integer”, “numeric”, “factor”, “ordered”

  • Required Packages: mlr3, penalized
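
These properties can also be inspected programmatically on a constructed learner, which is useful when assembling pipelines:

learner = lrn("classif.penalized")
learner$predict_types   # "response", "prob"
learner$feature_types
learner$packages        # required packages listed above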

Parameters

Id           Type     Default  Levels       Range
epsilon      numeric  1e-10                 \([0, \infty)\)
fusedl       logical  FALSE    TRUE, FALSE  -
lambda1      numeric  0                     \([0, \infty)\)
lambda2      numeric  0                     \([0, \infty)\)
maxiter      integer  -                     \([1, \infty)\)
positive     untyped  FALSE                 -
standardize  logical  FALSE    TRUE, FALSE  -
startbeta    untyped  -                     -
startgamma   untyped  -                     -
steps        untyped  1L                    -
trace        logical  TRUE     TRUE, FALSE  -
unpenalized  untyped  -                     -
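
As an illustrative sketch (the penalty values below are arbitrary, not recommendations), an elastic-net-style fit combining the L1 and L2 penalties can be configured at construction:

learner = lrn("classif.penalized", lambda1 = 1, lambda2 = 0.5, standardize = TRUE)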

References

Goeman, J J (2010). “L1 penalized estimation in the Cox proportional hazards model.” Biometrical journal, 52(1), 70–84.

Author

annanzrv

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifPenalized

Methods

Method new()

Creates a new instance of this R6 class.
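
Usage (a sketch, assuming the argument-free constructor that mlr3 learners typically use):

LearnerClassifPenalized$new()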


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifPenalized$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

# Define the Learner
learner = lrn("classif.penalized")
print(learner)
#> 
#> ── <LearnerClassifPenalized> (classif.penalized): Penalized Logistic Regression 
#> • Model: -
#> • Parameters: trace=FALSE
#> • Packages: mlr3 and penalized
#> • Predict Types: [response] and prob
#> • Feature Types: logical, integer, numeric, factor, and ordered
#> • Encapsulation: none (fallback: -)
#> • Properties: twoclass
#> • Other settings: use_weights = 'error'

# Define a Task
task = tsk("sonar")

# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> Penalized logistic regression object
#> Model failed to converge


# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> classif.ce 
#>  0.2898551
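
# The learner also supports probability predictions; a possible continuation
# of the example above (output omitted, as results depend on the train/test split):
learner_prob = lrn("classif.penalized", predict_type = "prob")
learner_prob$train(task, row_ids = ids$train)
predictions_prob = learner_prob$predict(task, row_ids = ids$test)
predictions_prob$score(msr("classif.auc"))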