Multinomial Logistic Regression model with a ridge estimator. Calls RWeka::Logistic() from package RWeka.

Custom mlr3 parameters

  • output_debug_info:

    • original id: output-debug-info

  • do_not_check_capabilities:

    • original id: do-not-check-capabilities

  • num_decimal_places:

    • original id: num-decimal-places

  • batch_size:

    • original id: batch-size

  • Reason for change: the ids of these control arguments were changed because their original ids contain dashes, an irregular pattern for mlr3 parameter ids.
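
The renamed arguments are set through their mlr3 ids rather than the original Weka ids. A minimal sketch (the values shown are illustrative, not defaults):

```r
library(mlr3)
library(mlr3extralearners)

# Set the renamed Weka control arguments via their mlr3 ids
# (illustrative values; see the Parameters table for defaults)
learner = lrn("classif.logistic",
  num_decimal_places = 4,  # original Weka id: num-decimal-places
  batch_size = 50          # original Weka id: batch-size
)
```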

Dictionary

This Learner can be instantiated via lrn():

lrn("classif.logistic")

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob”

  • Feature Types: “logical”, “integer”, “numeric”, “factor”, “ordered”

  • Required Packages: mlr3, RWeka

Parameters

| Id                        | Type    | Default | Levels      | Range                   |
|---------------------------|---------|---------|-------------|-------------------------|
| subset                    | untyped | -       |             | -                       |
| na.action                 | untyped | -       |             | -                       |
| C                         | logical | FALSE   | TRUE, FALSE | -                       |
| R                         | numeric | -       |             | \((-\infty, \infty)\)   |
| M                         | integer | -1      |             | \((-\infty, \infty)\)   |
| output_debug_info         | logical | FALSE   | TRUE, FALSE | -                       |
| do_not_check_capabilities | logical | FALSE   | TRUE, FALSE | -                       |
| num_decimal_places        | integer | 2       |             | \([1, \infty)\)         |
| batch_size                | integer | 100     |             | \([1, \infty)\)         |
| options                   | untyped | NULL    |             | -                       |
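
Hyperparameters from the table above can be passed at construction or changed afterwards via the learner's `param_set$values`. A hedged sketch (the values are illustrative; `R` is Weka's ridge penalty and `M` its maximum number of iterations, with -1 meaning run until convergence):

```r
library(mlr3)
library(mlr3extralearners)

# Configure the ridge parameter R at construction (value is illustrative)
learner = lrn("classif.logistic", R = 0.01)

# Or update the parameter set of an existing learner
learner$param_set$values$M = 100  # cap the number of iterations
```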

References

le Cessie, S., van Houwelingen, J.C. (1992). “Ridge Estimators in Logistic Regression.” Applied Statistics, 41(1), 191-201.

Author

damirpolat

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifLogistic

Methods

Inherited methods


Method new()

Creates a new instance of this R6 class.

Usage

LearnerClassifLogistic$new()


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifLogistic$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

# Define the Learner
learner = mlr3::lrn("classif.logistic")
print(learner)
#> 
#> ── <LearnerClassifLogistic> (classif.logistic): Multinomial Logistic Regression 
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3 and RWeka
#> • Predict Types: [response] and prob
#> • Feature Types: logical, integer, numeric, factor, and ordered
#> • Encapsulation: none (fallback: -)
#> • Properties: missings, multiclass, and twoclass
#> • Other settings: use_weights = 'error'

# Define a Task
task = mlr3::tsk("sonar")

# Create train and test set
ids = mlr3::partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> Logistic Regression with ridge parameter of 1.0E-8
#> Coefficients...
#>                               Class
#> Variable                          M
#> ===================================
#> V1                         963.2409
#> V10                         180.949
#> V11                        100.2896
#> V12                         36.5432
#> V13                        -43.3207
#> V14                        257.6831
#> V15                       -113.1665
#> V16                        -47.2853
#> V17                         -89.839
#> V18                          6.7985
#> V19                        101.7469
#> V2                         371.3584
#> V20                          10.328
#> V21                        -32.7191
#> V22                         79.1803
#> V23                        -32.0114
#> V24                         71.7556
#> V25                         20.4777
#> V26                        -48.7538
#> V27                        -17.2345
#> V28                        -23.1968
#> V29                          3.6524
#> V3                        -737.8031
#> V30                        296.5409
#> V31                       -334.2215
#> V32                        284.8906
#> V33                        -98.1006
#> V34                        -44.9557
#> V35                         57.0081
#> V36                       -178.9856
#> V37                        185.0774
#> V38                       -111.6702
#> V39                          76.328
#> V4                         104.1988
#> V40                       -254.9798
#> V41                        117.9697
#> V42                        130.3248
#> V43                        100.2415
#> V44                        -62.6831
#> V45                         -1.9495
#> V46                        110.5788
#> V47                       -118.0625
#> V48                       -399.7462
#> V49                        892.2377
#> V5                         240.4746
#> V50                      -1352.1521
#> V51                       2634.7568
#> V52                         719.506
#> V53                       -767.2427
#> V54                        987.6634
#> V55                      -1483.3753
#> V56                       1827.9333
#> V57                      -2260.1346
#> V58                       -967.6498
#> V59                        881.6058
#> V6                          -69.335
#> V60                       1061.9431
#> V7                         -124.656
#> V8                        -256.1079
#> V9                          52.9498
#> Intercept                 -195.6887
#> 
#> 
#> Odds Ratios...
#>                               Class
#> Variable                          M
#> ===================================
#> V1                         Infinity
#> V10            3.847370317640745E78
#> V11            3.591180380986417E43
#> V12            7.421550430339538E15
#> V13                               0
#> V14           8.135133927081057E111
#> V15                               0
#> V16                               0
#> V17                               0
#> V18                        896.5423
#> V19            1.542152677147186E44
#> V2           1.9007202303576837E161
#> V20                      30576.9837
#> V21                               0
#> V22           2.4410443582799312E34
#> V23                               0
#> V24           1.4556310844914721E31
#> V25                  782291815.0178
#> V26                               0
#> V27                               0
#> V28                               0
#> V29                         38.5676
#> V3                                0
#> V30           6.110727894539722E128
#> V31                               0
#> V32           5.326166986603397E123
#> V33                               0
#> V34                               0
#> V35            5.731808943493481E24
#> V36                               0
#> V37           2.3884286934089553E80
#> V38                               0
#> V39           1.4087370184379145E33
#> V4             1.790385661245591E45
#> V40                               0
#> V41           1.7123834034487543E51
#> V42           3.9750066690859495E56
#> V43           3.4222440846750425E43
#> V44                               0
#> V45                          0.1424
#> V46           1.0562476720310476E48
#> V47                               0
#> V48                               0
#> V49                        Infinity
#> V5           2.7340552569828604E104
#> V50                               0
#> V51                        Infinity
#> V52                        Infinity
#> V53                               0
#> V54                        Infinity
#> V55                               0
#> V56                        Infinity
#> V57                               0
#> V58                               0
#> V59                        Infinity
#> V6                                0
#> V60                        Infinity
#> V7                                0
#> V8                                0
#> V9             9.903733459884594E22
#> 


# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> classif.ce 
#>  0.2898551