Multinomial Logistic Regression model with a ridge estimator. Calls RWeka::Logistic() from RWeka.

Custom mlr3 parameters

  • output_debug_info:

    • original id: output-debug-info

  • do_not_check_capabilities:

    • original id: do-not-check-capabilities

  • num_decimal_places:

    • original id: num-decimal-places

  • batch_size:

    • original id: batch-size

  • Reason for change: the ids of these control arguments were renamed because their original Weka ids contain hyphens, which are not valid in mlr3 parameter ids.
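For illustration, the renamed parameters are set like any other hyperparameter through lrn(); the values below are arbitrary:

```r
library(mlr3)
library(mlr3extralearners)

# The mlr3 ids map to Weka's hyphenated control arguments, e.g.
# num_decimal_places -> num-decimal-places, batch_size -> batch-size.
learner = lrn("classif.logistic",
  num_decimal_places = 4,
  batch_size = 50
)
learner$param_set$values
```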

Dictionary

This Learner can be instantiated via lrn():

lrn("classif.logistic")

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob”

  • Feature Types: “logical”, “integer”, “numeric”, “factor”, “ordered”

  • Required Packages: mlr3, RWeka
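Since the learner supports both predict types listed above, probability predictions can be requested at construction; a minimal sketch (assuming RWeka and a working Java installation):

```r
library(mlr3)
library(mlr3extralearners)

# Request per-class probabilities instead of hard class labels.
learner = lrn("classif.logistic", predict_type = "prob")
task = tsk("sonar")
learner$train(task)
head(learner$predict(task)$prob)  # matrix with one column per class
```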

Parameters

Id                         Type     Default  Levels       Range
subset                     untyped  -        -            -
na.action                  untyped  -        -            -
C                          logical  FALSE    TRUE, FALSE  -
R                          numeric  -        -            \((-\infty, \infty)\)
M                          integer  -1       -            \((-\infty, \infty)\)
output_debug_info          logical  FALSE    TRUE, FALSE  -
do_not_check_capabilities  logical  FALSE    TRUE, FALSE  -
num_decimal_places         integer  2        -            \([1, \infty)\)
batch_size                 integer  100      -            \([1, \infty)\)
options                    untyped  NULL     -            -
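The C, R and M parameters are passed straight through to RWeka::Logistic(): in Weka's option list, R sets the ridge value in the log-likelihood and M caps the number of iterations, with -1 meaning iterate until convergence (RWeka::WOW("Logistic") prints the authoritative descriptions). A sketch with arbitrary values:

```r
library(mlr3)
library(mlr3extralearners)

# R is Weka's ridge penalty in the log-likelihood; M caps the number of
# iterations (-1 iterates until convergence).
learner = lrn("classif.logistic", R = 1e-3, M = 200)
learner$param_set$values
```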

References

le Cessie, S., van Houwelingen, J.C. (1992). “Ridge Estimators in Logistic Regression.” Applied Statistics, 41(1), 191-201.

Author

damirpolat

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifLogistic

Methods

Inherited methods


Method new()

Creates a new instance of this R6 class.

Usage

LearnerClassifLogistic$new()


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifLogistic$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

# Define the Learner
learner = mlr3::lrn("classif.logistic")
print(learner)
#> 
#> ── <LearnerClassifLogistic> (classif.logistic): Multinomial Logistic Regression 
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3 and RWeka
#> • Predict Types: [response] and prob
#> • Feature Types: logical, integer, numeric, factor, and ordered
#> • Encapsulation: none (fallback: -)
#> • Properties: missings, multiclass, and twoclass
#> • Other settings: use_weights = 'error'

# Define a Task
task = mlr3::tsk("sonar")

# Create train and test set
ids = mlr3::partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> Logistic Regression with ridge parameter of 1.0E-8
#> Coefficients...
#>                               Class
#> Variable                          M
#> ===================================
#> V1                         234.4741
#> V10                       -168.3497
#> V11                         95.8339
#> V12                        126.0161
#> V13                        120.0834
#> V14                        -83.7475
#> V15                          2.3399
#> V16                        -34.2557
#> V17                          9.0959
#> V18                        -26.8416
#> V19                         44.8569
#> V2                         505.0166
#> V20                        -26.6068
#> V21                         -2.3816
#> V22                        -24.9652
#> V23                          0.1337
#> V24                        159.0497
#> V25                        -56.3656
#> V26                       -114.2229
#> V27                         32.6665
#> V28                        -14.9416
#> V29                        -37.7311
#> V3                        -531.5841
#> V30                        158.2676
#> V31                       -216.4763
#> V32                         33.5987
#> V33                         55.1337
#> V34                       -102.8862
#> V35                        128.8825
#> V36                       -156.5877
#> V37                        -13.3755
#> V38                        -58.7303
#> V39                        103.0508
#> V4                          21.0206
#> V40                        -39.5133
#> V41                        -35.7022
#> V42                         35.5515
#> V43                         14.5269
#> V44                         54.3705
#> V45                          76.358
#> V46                        117.6125
#> V47                       -188.6731
#> V48                        611.2193
#> V49                       -357.2244
#> V5                         -20.7021
#> V50                       -165.6328
#> V51                       -660.0381
#> V52                         897.838
#> V53                        593.1662
#> V54                       -204.7508
#> V55                       -4102.361
#> V56                        884.8135
#> V57                      -1544.5834
#> V58                       1929.5003
#> V59                       2081.3718
#> V6                          11.0069
#> V60                      -2984.4924
#> V7                        -233.1781
#> V8                         -22.0477
#> V9                         136.8542
#> Intercept                   36.4046
#> 
#> 
#> Odds Ratios...
#>                               Class
#> Variable                          M
#> ===================================
#> V1            6.773093712238356E101
#> V10                               0
#> V11             4.17012235709738E41
#> V12           5.3468885122110196E54
#> V13           1.4175625944937342E52
#> V14                               0
#> V15                         10.3803
#> V16                               0
#> V17                       8918.7941
#> V18                               0
#> V19            3.027699100984806E19
#> V2           2.1180576930437113E219
#> V20                               0
#> V21                          0.0924
#> V22                               0
#> V23                           1.143
#> V24           1.1868865288744851E69
#> V25                               0
#> V26                               0
#> V27           1.5377828922006897E14
#> V28                               0
#> V29                               0
#> V3                                0
#> V30            5.429446741361161E68
#> V31                               0
#> V32           3.9059212082766444E14
#> V33            8.795167006010712E23
#> V34                               0
#> V35             9.39674958563154E55
#> V36                               0
#> V37                               0
#> V38                               0
#> V39           5.6808778217904796E44
#> V4                  1346227752.3527
#> V40                               0
#> V41                               0
#> V42           2.7529688953268175E15
#> V43                    2036719.8113
#> V44            4.100250323271359E23
#> V45           1.4516471825283777E33
#> V46           1.1979722170529673E51
#> V47                               0
#> V48          2.8129215445311952E265
#> V49                               0
#> V5                                0
#> V50                               0
#> V51                               0
#> V52                        Infinity
#> V53          4.0627010249816973E257
#> V54                               0
#> V55                               0
#> V56                        Infinity
#> V57                               0
#> V58                        Infinity
#> V59                        Infinity
#> V6                       60287.1823
#> V60                               0
#> V7                                0
#> V8                                0
#> V9            2.7228968820057472E59
#> 


# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> classif.ce 
#>  0.3188406
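Extending the example above, a resampling scheme such as 5-fold cross-validation gives a less variable error estimate than the single train/test split; a sketch using the same task and packages:

```r
library(mlr3)
library(mlr3extralearners)

task = tsk("sonar")
learner = lrn("classif.logistic")

# 5-fold cross-validation; aggregate() averages the per-fold
# classification errors into a single estimate.
rr = resample(task, learner, rsmp("cv", folds = 5))
rr$aggregate(msr("classif.ce"))
```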