Multinomial Logistic Regression model with a ridge estimator. Calls RWeka::Logistic() from RWeka.

Custom mlr3 parameters

  • output_debug_info:

    • original id: output-debug-info

  • do_not_check_capabilities:

    • original id: do-not-check-capabilities

  • num_decimal_places:

    • original id: num-decimal-places

  • batch_size:

    • original id: batch-size

  • Reason for change: the ids of these control arguments were changed because their original ids contain dashes, an irregular pattern for mlr3 parameter ids.
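As a brief sketch (assuming mlr3 and mlr3extralearners are attached), the renamed arguments are passed with their underscore ids:

```r
library(mlr3)
library(mlr3extralearners)

# Weka's `num-decimal-places` and `batch-size` are addressed as
# `num_decimal_places` and `batch_size` in mlr3:
learner = lrn("classif.logistic",
  num_decimal_places = 4,
  batch_size = 50
)
learner$param_set$values
```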

Dictionary

This Learner can be instantiated via lrn():

lrn("classif.logistic")

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob”

  • Feature Types: “logical”, “integer”, “numeric”, “factor”, “ordered”

  • Required Packages: mlr3, RWeka

Parameters

Id                         Type     Default  Levels       Range
subset                     untyped  -        -            -
na.action                  untyped  -        -            -
C                          logical  FALSE    TRUE, FALSE  -
R                          numeric  -        -            \((-\infty, \infty)\)
M                          integer  -1       -            \((-\infty, \infty)\)
output_debug_info          logical  FALSE    TRUE, FALSE  -
do_not_check_capabilities  logical  FALSE    TRUE, FALSE  -
num_decimal_places         integer  2        -            \([1, \infty)\)
batch_size                 integer  100      -            \([1, \infty)\)
options                    untyped  NULL     -            -
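For example, the ridge penalty R and the maximum number of iterations M can be set at construction or changed afterwards via the parameter set (a sketch, assuming mlr3 and mlr3extralearners are installed):

```r
library(mlr3)
library(mlr3extralearners)

# Set the ridge parameter and cap the number of iterations
learner = lrn("classif.logistic", R = 1e-4, M = 200)

# Hyperparameters can also be updated after construction:
learner$param_set$values$R = 1e-2
```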

References

le Cessie, S., van Houwelingen, J.C. (1992). “Ridge Estimators in Logistic Regression.” Applied Statistics, 41(1), 191-201.

See also

Author

damirpolat

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifLogistic

Active bindings

marshaled

(logical(1))
Whether the learner has been marshaled.

Methods

Inherited methods


Method new()

Creates a new instance of this R6 class.


Method marshal()

Marshal the learner's model.

Usage

LearnerClassifLogistic$marshal(...)

Arguments

...

(any)
Additional arguments passed to mlr3::marshal_model().


Method unmarshal()

Unmarshal the learner's model.

Usage

LearnerClassifLogistic$unmarshal(...)

Arguments

...

(any)
Additional arguments passed to mlr3::unmarshal_model().
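Marshaling matters here because the fitted model wraps a Java object reference that does not survive R serialization. A minimal save/reload round trip might look like this (a sketch, assuming the packages are installed):

```r
library(mlr3)
library(mlr3extralearners)

task = tsk("sonar")
learner = lrn("classif.logistic")
learner$train(task)

# Convert the Java-backed model into a serializable form before saving
learner$marshal()
path = tempfile(fileext = ".rds")
saveRDS(learner, path)

# After reloading, restore the model so it can predict again
restored = readRDS(path)
restored$unmarshal()
restored$predict(task)
```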


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifLogistic$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

# Define the Learner
learner = lrn("classif.logistic")
print(learner)
#> 
#> ── <LearnerClassifLogistic> (classif.logistic): Multinomial Logistic Regression 
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3 and RWeka
#> • Predict Types: [response] and prob
#> • Feature Types: logical, integer, numeric, factor, and ordered
#> • Encapsulation: none (fallback: -)
#> • Properties: marshal, missings, multiclass, and twoclass
#> • Other settings: use_weights = 'error'

# Define a Task
task = tsk("sonar")

# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> Logistic Regression with ridge parameter of 1.0E-8
#> Coefficients...
#>                               Class
#> Variable                          M
#> ===================================
#> V1                         141.7056
#> V10                       -157.8544
#> V11                         32.7798
#> V12                        156.6802
#> V13                        138.3751
#> V14                        -60.1789
#> V15                         66.5184
#> V16                          4.4752
#> V17                        -96.4999
#> V18                         90.3414
#> V19                       -121.1148
#> V2                         284.7882
#> V20                        -11.8564
#> V21                        167.1379
#> V22                        -78.0382
#> V23                          38.494
#> V24                         38.0942
#> V25                       -112.5763
#> V26                         36.9442
#> V27                         87.4352
#> V28                        -21.8588
#> V29                        -33.0733
#> V3                        -472.0053
#> V30                         175.052
#> V31                       -225.0902
#> V32                         65.5086
#> V33                        110.1974
#> V34                        -124.377
#> V35                        144.6884
#> V36                        -24.2608
#> V37                        -154.406
#> V38                         94.0993
#> V39                         30.0656
#> V4                         375.7445
#> V40                       -222.7081
#> V41                        207.4389
#> V42                        -80.0374
#> V43                        -98.6155
#> V44                        160.3393
#> V45                       -166.3885
#> V46                          99.671
#> V47                        355.5405
#> V48                         51.9521
#> V49                        494.7247
#> V5                         -119.831
#> V50                      -1585.2936
#> V51                         -11.327
#> V52                        568.7564
#> V53                        1611.869
#> V54                         231.858
#> V55                      -2779.7863
#> V56                       -620.2545
#> V57                       -194.3717
#> V58                        2694.286
#> V59                       1291.2497
#> V6                          18.3255
#> V60                        155.6319
#> V7                         -310.454
#> V8                         -72.8292
#> V9                         183.6977
#> Intercept                 -130.9084
#> 
#> 
#> Odds Ratios...
#>                               Class
#> Variable                          M
#> ===================================
#> V1             3.483102047143862E61
#> V10                               0
#> V11           1.7222636940154688E14
#> V12           1.1100364657452056E68
#> V13           1.2460882264751712E60
#> V14                               0
#> V15            7.736830180367892E28
#> V16                         87.8088
#> V17                               0
#> V18           1.7169913666529454E39
#> V19                               0
#> V2            4.807632105749887E123
#> V20                               0
#> V21            3.864228134364157E72
#> V22                               0
#> V23           5.2209198262605632E16
#> V24           3.5001058911778024E16
#> V25                               0
#> V26            1.108324861993998E16
#> V27            9.389387351385245E37
#> V28                               0
#> V29                               0
#> V3                                0
#> V30            1.057116512744492E76
#> V31                               0
#> V32           2.8184483738826143E28
#> V33             7.21298583941494E47
#> V34                               0
#> V35            6.876871823249925E62
#> V36                               0
#> V37                               0
#> V38             7.35881715414039E40
#> V39           1.1411249886841768E13
#> V4           1.5267414503875448E163
#> V40                               0
#> V41           1.2290986354592493E90
#> V42                               0
#> V43                               0
#> V44             4.30986189167004E69
#> V45                               0
#> V46           1.9344182396349647E43
#> V47           2.566071874360297E154
#> V48           3.6518514371077144E22
#> V49           7.181569280211731E214
#> V5                                0
#> V50                               0
#> V51                               0
#> V52          1.0180558926746328E247
#> V53                        Infinity
#> V54           4.950539297457088E100
#> V55                               0
#> V56                               0
#> V57                               0
#> V58                        Infinity
#> V59                        Infinity
#> V6                    90920985.4244
#> V60            3.890984251062763E67
#> V7                                0
#> V8                                0
#> V9              6.01009004676478E79
#> 


# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> classif.ce 
#>  0.3478261
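Since the learner also supports the "prob" predict type, class probabilities can be requested and scored with a probability-based measure such as AUC (a sketch continuing the example above):

```r
# Request class probabilities instead of hard labels
learner$predict_type = "prob"
predictions = learner$predict(task, row_ids = ids$test)
predictions$score(msr("classif.auc"))
```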