Multinomial Logistic Regression model with a ridge estimator. Calls RWeka::Logistic() from package RWeka.

Custom mlr3 parameters

  • output_debug_info:

    • original id: output-debug-info

  • do_not_check_capabilities:

    • original id: do-not-check-capabilities

  • num_decimal_places:

    • original id: num-decimal-places

  • batch_size:

    • original id: batch-size

  • Reason for change: the ids of these control arguments were renamed because their original ids contain dashes, which are not valid in R argument names.
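The renamed ids are set like any other hyperparameter. A minimal sketch, assuming mlr3, mlr3extralearners, and RWeka (with a working Java installation) are available:

```r
library(mlr3)
library(mlr3extralearners)

# Use the mlr3-side ids (underscores), not Weka's dashed ids
# ("num-decimal-places", "batch-size", ...)
learner = lrn("classif.logistic",
  num_decimal_places = 4,
  batch_size = 50
)
learner$param_set$values
```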

Dictionary

This Learner can be instantiated via lrn():

lrn("classif.logistic")

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob”

  • Feature Types: “logical”, “integer”, “numeric”, “factor”, “ordered”

  • Required Packages: mlr3, RWeka

Parameters

Id                         Type     Default  Levels       Range
subset                     untyped  -        -            -
na.action                  untyped  -        -            -
C                          logical  FALSE    TRUE, FALSE  -
R                          numeric  -        -            \((-\infty, \infty)\)
M                          integer  -1       -            \((-\infty, \infty)\)
output_debug_info          logical  FALSE    TRUE, FALSE  -
do_not_check_capabilities  logical  FALSE    TRUE, FALSE  -
num_decimal_places         integer  2        -            \([1, \infty)\)
batch_size                 integer  100      -            \([1, \infty)\)
options                    untyped  NULL     -            -
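In Weka's Logistic, R controls the ridge penalty and M caps the number of iterations (-1 means iterate until convergence). A hedged sketch of tuning these, assuming the packages above are installed:

```r
library(mlr3)
library(mlr3extralearners)

# Stronger ridge penalty than Weka's default, at most 200 iterations
learner = lrn("classif.logistic", R = 1e-4, M = 200)
learner$param_set$values
```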

References

le Cessie, S., van Houwelingen, J.C. (1992). “Ridge Estimators in Logistic Regression.” Applied Statistics, 41(1), 191-201.

See also

Author

damirpolat

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifLogistic

Active bindings

marshaled

(logical(1))
Whether the learner has been marshaled.

Methods

Inherited methods


Method new()

Creates a new instance of this R6 class.


Method marshal()

Marshal the learner's model.

Usage

LearnerClassifLogistic$marshal(...)

Arguments

...

(any)
Additional arguments passed to mlr3::marshal_model().


Method unmarshal()

Unmarshal the learner's model.

Usage

LearnerClassifLogistic$unmarshal(...)

Arguments

...

(any)
Additional arguments passed to mlr3::unmarshal_model().
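Marshaling matters when persisting a trained learner: the fitted Weka model is backed by a Java object reference that does not survive serialization on its own. A sketch of the save/restore workflow, assuming a trained learner and a writable working directory:

```r
library(mlr3)
library(mlr3extralearners)

learner = lrn("classif.logistic")
learner$train(tsk("sonar"))

# Convert the Java-backed model into a serializable form before saving
learner$marshal()
saveRDS(learner, "logistic.rds")

# In a new session: read the object back, then restore the live model
learner2 = readRDS("logistic.rds")
learner2$unmarshal()
```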


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifLogistic$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

# Define the Learner
learner = lrn("classif.logistic")
print(learner)
#> 
#> ── <LearnerClassifLogistic> (classif.logistic): Multinomial Logistic Regression 
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3 and RWeka
#> • Predict Types: [response] and prob
#> • Feature Types: logical, integer, numeric, factor, and ordered
#> • Encapsulation: none (fallback: -)
#> • Properties: marshal, missings, multiclass, and twoclass
#> • Other settings: use_weights = 'error'

# Define a Task
task = tsk("sonar")

# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> Logistic Regression with ridge parameter of 1.0E-8
#> Coefficients...
#>                               Class
#> Variable                          M
#> ===================================
#> V1                         160.0802
#> V10                        165.8747
#> V11                        -50.7695
#> V12                        364.7873
#> V13                       -228.5022
#> V14                        151.3207
#> V15                        -51.6741
#> V16                        -45.8231
#> V17                         -8.7703
#> V18                         85.7334
#> V19                       -106.9552
#> V2                         343.7394
#> V20                        109.1605
#> V21                         29.7469
#> V22                        -62.9676
#> V23                          6.3828
#> V24                        120.1145
#> V25                         37.4109
#> V26                        -73.0894
#> V27                         83.7133
#> V28                         -91.383
#> V29                        -38.0101
#> V3                        -865.8062
#> V30                        264.9179
#> V31                       -339.9919
#> V32                        262.8845
#> V33                        -56.5373
#> V34                       -211.6357
#> V35                        261.6894
#> V36                       -109.8984
#> V37                       -145.7715
#> V38                         73.6573
#> V39                        133.6731
#> V4                         590.4142
#> V40                       -307.4821
#> V41                        181.2724
#> V42                       -117.2038
#> V43                        123.3279
#> V44                         91.4493
#> V45                       -195.3352
#> V46                        149.3538
#> V47                        229.6174
#> V48                        167.0209
#> V49                        633.0498
#> V5                         -40.4072
#> V50                       -974.6897
#> V51                       1069.7159
#> V52                        425.2131
#> V53                      -1901.4611
#> V54                       -478.8435
#> V55                      -2916.0049
#> V56                       1172.7228
#> V57                      -1469.0052
#> V58                        1481.591
#> V59                       2457.3896
#> V6                          84.6081
#> V60                      -2418.2878
#> V7                        -151.1801
#> V8                         -211.469
#> V9                         -54.1216
#> Intercept                  -133.673
#> 
#> 
#> Odds Ratios...
#>                               Class
#> Variable                          M
#> ===================================
#> V1            3.3260833710419706E69
#> V10           1.0926280187813116E72
#> V11                               0
#> V12            2.66146148171654E158
#> V13                               0
#> V14             5.22093781125351E65
#> V15                               0
#> V16                               0
#> V17                          0.0002
#> V18            1.712168312141236E37
#> V19                               0
#> V2           1.9236202553738922E149
#> V20           2.5574307368591502E47
#> V21            8.296551234157286E12
#> V22                               0
#> V23                         591.563
#> V24           1.4624005840803074E52
#> V25            1.767425222703903E16
#> V26                               0
#> V27            2.271140289338067E36
#> V28                               0
#> V29                               0
#> V3                                0
#> V30          1.1281743272632995E115
#> V31                               0
#> V32          1.4767216301516972E114
#> V33                               0
#> V34                               0
#> V35          4.4695455060704306E113
#> V36                               0
#> V37                               0
#> V38            9.748947621026848E31
#> V39           1.1310270274963285E58
#> V4           2.5918612588917036E256
#> V40                               0
#> V41            5.316360748075023E78
#> V42                               0
#> V43            3.636032097197001E53
#> V44           5.1988834412323376E39
#> V45                               0
#> V46            7.303310878430343E64
#> V47            5.267123636904277E99
#> V48            3.437672877995919E72
#> V49           8.512200853063253E274
#> V5                                0
#> V50                               0
#> V51                        Infinity
#> V52            4.65285077941955E184
#> V53                               0
#> V54                               0
#> V55                               0
#> V56                        Infinity
#> V57                               0
#> V58                        Infinity
#> V59                        Infinity
#> V6             5.556853432746669E36
#> V60                               0
#> V7                                0
#> V8                                0
#> V9                                0
#> 


# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> classif.ce 
#>  0.2463768