Classification Logistic Regression Learner

Source: R/learner_RWeka_classif_logistic.R

mlr_learners_classif.logistic.Rd

Multinomial Logistic Regression model with a ridge estimator.
Calls RWeka::Logistic() from RWeka.
Custom mlr3 parameters

output_debug_info: original id: output-debug-info
do_not_check_capabilities: original id: do-not-check-capabilities
num_decimal_places: original id: num-decimal-places
batch_size: original id: batch-size

Reason for change: the ids of these control arguments were changed because their original ids contain an irregular pattern (hyphens), which is not allowed in mlr3 parameter names.
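Because of the renaming above, these parameters are set with their mlr3 ids (underscores), not Weka's hyphenated ids. A minimal sketch, assuming mlr3 and mlr3extralearners are installed:

```r
library(mlr3)
library(mlr3extralearners)

# Use the mlr3 ids (underscores), not Weka's hyphenated originals
learner = lrn("classif.logistic",
  output_debug_info = TRUE,
  num_decimal_places = 4
)
```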
Parameters
| Id | Type | Default | Levels | Range |
|----|------|---------|--------|-------|
| subset | untyped | - | - | - |
| na.action | untyped | - | - | - |
| C | logical | FALSE | TRUE, FALSE | - |
| R | numeric | - | - | \((-\infty, \infty)\) |
| M | integer | -1 | - | \((-\infty, \infty)\) |
| output_debug_info | logical | FALSE | TRUE, FALSE | - |
| do_not_check_capabilities | logical | FALSE | TRUE, FALSE | - |
| num_decimal_places | integer | 2 | - | \([1, \infty)\) |
| batch_size | integer | 100 | - | \([1, \infty)\) |
| options | untyped | NULL | - | - |
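The single-letter ids map onto Weka's command-line flags for Logistic. Assuming the usual Weka meanings (R is the ridge value in the log-likelihood, M the maximum number of iterations, with -1 meaning iterate until convergence; see RWeka::WOW("Logistic") for the authoritative list), a configuration sketch:

```r
library(mlr3)
library(mlr3extralearners)

# Assumption: R = ridge penalty, M = max iterations (-1 = until convergence),
# following Weka's documented flags for the Logistic classifier
learner = lrn("classif.logistic", R = 1e-4, M = 500)
learner$param_set$values
```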
References
le Cessie, S., van Houwelingen, J.C. (1992). “Ridge Estimators in Logistic Regression.” Applied Statistics, 41(1), 191-201.
See also
as.data.table(mlr_learners) for a table of available Learners in the running session (depending on the loaded packages).
Chapter in the mlr3book: https://mlr3book.mlr-org.com/chapters/chapter2/data_and_basic_modeling.html#sec-learners
mlr3learners for a selection of recommended learners.
mlr3cluster for unsupervised clustering learners.
mlr3pipelines to combine learners with pre- and postprocessing steps.
mlr3tuning for tuning of hyperparameters, mlr3tuningspaces for established default tuning spaces.
Super classes
mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifLogistic
Methods
Inherited methods
mlr3::Learner$base_learner()
mlr3::Learner$configure()
mlr3::Learner$encapsulate()
mlr3::Learner$format()
mlr3::Learner$help()
mlr3::Learner$predict()
mlr3::Learner$predict_newdata()
mlr3::Learner$print()
mlr3::Learner$reset()
mlr3::Learner$selected_features()
mlr3::Learner$train()
mlr3::LearnerClassif$predict_newdata_fast()
Method marshal()
Marshal the learner's model.
Arguments
... (any)
Additional arguments passed to mlr3::marshal_model().
Method unmarshal()
Unmarshal the learner's model.
Arguments
... (any)
Additional arguments passed to mlr3::unmarshal_model().
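This learner has the "marshal" property, so the fitted model must be marshaled into a serializable form before saving and unmarshaled after loading (the underlying Weka model is a Java object that does not survive plain R serialization). A minimal sketch:

```r
library(mlr3)
library(mlr3extralearners)

learner = lrn("classif.logistic")
learner$train(tsk("sonar"))

# Marshal before serializing, unmarshal after loading
learner$marshal()
path = tempfile(fileext = ".rds")
saveRDS(learner, path)

learner = readRDS(path)
learner$unmarshal()
```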
Examples
# Define the Learner
learner = lrn("classif.logistic")
print(learner)
#>
#> ── <LearnerClassifLogistic> (classif.logistic): Multinomial Logistic Regression
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3 and RWeka
#> • Predict Types: [response] and prob
#> • Feature Types: logical, integer, numeric, factor, and ordered
#> • Encapsulation: none (fallback: -)
#> • Properties: marshal, missings, multiclass, and twoclass
#> • Other settings: use_weights = 'error', predict_raw = 'FALSE'
# Define a Task
task = tsk("sonar")
# Create train and test set
ids = partition(task)
# Train the learner on the training ids
learner$train(task, row_ids = ids$train)
print(learner$model)
#> Logistic Regression with ridge parameter of 1.0E-8
#> Coefficients...
#> Class
#> Variable M
#> ===================================
#> V1 467.3798
#> V10 -41.5432
#> V11 -22.8097
#> V12 336.4818
#> V13 -143.4361
#> V14 -104.9819
#> V15 214.5262
#> V16 -179.7404
#> V17 -151.6148
#> V18 345.8181
#> V19 -165.8075
#> V2 -253.8551
#> V20 89.3637
#> V21 -66.6925
#> V22 24.9585
#> V23 -57.2395
#> V24 155.6309
#> V25 -46.6738
#> V26 -19.8685
#> V27 80.1766
#> V28 0.5524
#> V29 -81.8569
#> V3 -1242.2082
#> V30 138.9951
#> V31 -179.8879
#> V32 -83.0028
#> V33 180.4573
#> V34 -143.9717
#> V35 170.0045
#> V36 -189.0555
#> V37 -62.3555
#> V38 87.5014
#> V39 146.317
#> V4 949.0878
#> V40 -349.8408
#> V41 109.1339
#> V42 -63.4086
#> V43 22.4046
#> V44 29.4181
#> V45 123.8087
#> V46 161.8228
#> V47 406.3602
#> V48 -155.2853
#> V49 1298.7387
#> V5 -358.7235
#> V50 -2677.2698
#> V51 -404.4741
#> V52 1098.3966
#> V53 764.6838
#> V54 2475.1307
#> V55 -848.241
#> V56 -1040.1306
#> V57 -1503.5589
#> V58 5979.7596
#> V59 688.1689
#> V6 223.5415
#> V60 -161.1026
#> V7 -352.3288
#> V8 -367.7334
#> V9 276.1759
#> Intercept -108.9161
#>
#>
#> Odds Ratios...
#> Class
#> Variable M
#> ===================================
#> V1 9.5601675908207E202
#> V10 0
#> V11 0
#> V12 1.3557230335009943E146
#> V13 0
#> V14 0
#> V15 1.4708415966907256E93
#> V16 0
#> V17 0
#> V18 1.5377627877917286E150
#> V19 0
#> V2 0
#> V20 6.4591203255189745E38
#> V21 0
#> V22 6.907490464758876E10
#> V23 0
#> V24 3.8872965995820884E67
#> V25 0
#> V26 0
#> V27 6.610506609274582E34
#> V28 1.7375
#> V29 0
#> V3 0
#> V30 2.3162830245032483E60
#> V31 0
#> V32 0
#> V33 2.3530408464450086E78
#> V34 0
#> V35 6.792507061467103E73
#> V36 0
#> V37 0
#> V38 1.0031674741039563E38
#> V39 3.504888801660466E63
#> V4 Infinity
#> V40 0
#> V41 2.4903654615641384E47
#> V42 0
#> V43 5372848396.3334
#> V44 5.971901768246407E12
#> V45 5.880762513910094E53
#> V46 1.899925357378846E70
#> V47 3.020047118571485E176
#> V48 0
#> V49 Infinity
#> V5 0
#> V50 0
#> V51 0
#> V52 Infinity
#> V53 Infinity
#> V54 Infinity
#> V55 0
#> V56 0
#> V57 0
#> V58 Infinity
#> V59 7.378317490332262E298
#> V6 1.2101184149567128E97
#> V60 0
#> V7 0
#> V8 0
#> V9 8.74361388943414E119
#>
# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)
# Score the predictions
predictions$score()
#> classif.ce
#> 0.1449275
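Beyond the single train/test split shown above, the learner can be evaluated with cross-validation via mlr3's resample(); a sketch assuming the same sonar task:

```r
library(mlr3)
library(mlr3extralearners)

task = tsk("sonar")
learner = lrn("classif.logistic")

# 5-fold cross-validation, aggregated mean classification error
rr = resample(task, learner, rsmp("cv", folds = 5))
rr$aggregate(msr("classif.ce"))
```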