
Multinomial Logistic Regression model with a ridge estimator. Calls RWeka::Logistic() from package RWeka.

Custom mlr3 parameters

  • output_debug_info:

    • original id: output-debug-info

  • do_not_check_capabilities:

    • original id: do-not-check-capabilities

  • num_decimal_places:

    • original id: num-decimal-places

  • batch_size:

    • original id: batch-size

  • Reason for change: the ids of these control arguments were changed because their original ids contain hyphens, an irregular pattern for R argument names (see the example below).
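
The renamed parameters are set like any other hyperparameter. A minimal sketch, with illustrative values only:

# renamed ids use underscores in place of the hyphens in Weka's control arguments
learner = lrn("classif.logistic")
learner$param_set$values$batch_size = 50
learner$param_set$values$num_decimal_places = 4
learner$param_set$values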

Dictionary

This Learner can be instantiated via lrn():

lrn("classif.logistic")

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob” (see the example after this list)

  • Feature Types: “logical”, “integer”, “numeric”, “factor”, “ordered”

  • Required Packages: mlr3, RWeka
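
The predict type can be chosen at construction (or changed later via learner$predict_type); a minimal sketch:

# request probability predictions instead of the default class labels
learner = lrn("classif.logistic", predict_type = "prob")
learner$predict_type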

Parameters

Id                        | Type    | Default | Levels      | Range
--------------------------|---------|---------|-------------|----------------------
subset                    | untyped | -       |             | -
na.action                 | untyped | -       |             | -
C                         | logical | FALSE   | TRUE, FALSE | -
R                         | numeric | -       |             | \((-\infty, \infty)\)
M                         | integer | -1      |             | \((-\infty, \infty)\)
output_debug_info         | logical | FALSE   | TRUE, FALSE | -
do_not_check_capabilities | logical | FALSE   | TRUE, FALSE | -
num_decimal_places        | integer | 2       |             | \([1, \infty)\)
batch_size                | integer | 100     |             | \([1, \infty)\)
options                   | untyped | NULL    |             | -
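
A minimal sketch of setting hyperparameters at construction; interpreting R as the ridge value in the log-likelihood and M as the maximum number of iterations (-1 meaning until convergence) follows the RWeka::Logistic() documentation, and the values below are purely illustrative:

learner = lrn("classif.logistic", R = 1e-4, M = 200)
learner$param_set$values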

References

le Cessie, S., van Houwelingen, J.C. (1992). “Ridge Estimators in Logistic Regression.” Applied Statistics, 41(1), 191-201.

See also

Author

damirpolat

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifLogistic

Active bindings

marshaled

(logical(1))
Whether the learner has been marshaled.

Methods

Inherited methods


Method new()

Creates a new instance of this R6 class.


Method marshal()

Marshal the learner's model.

Usage

LearnerClassifLogistic$marshal(...)

Arguments

...

(any)
Additional arguments passed to mlr3::marshal_model().


Method unmarshal()

Unmarshal the learner's model.

Usage

LearnerClassifLogistic$unmarshal(...)

Arguments

...

(any)
Additional arguments passed to mlr3::unmarshal_model().
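
A minimal sketch of the marshaling round trip, e.g. before serializing a trained learner or shipping it to a parallel worker (the task is only an example):

learner = lrn("classif.logistic")
learner$train(tsk("sonar"))

learner$marshal()     # convert the fitted model into a serializable form
learner$marshaled     # TRUE
learner$unmarshal()   # restore the original model
learner$marshaled     # FALSE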


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifLogistic$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.
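
Because learners are R6 objects, plain assignment copies only the reference; a minimal sketch of taking an independent copy before changing hyperparameters (illustrative value):

learner = lrn("classif.logistic")
learner2 = learner$clone(deep = TRUE)
learner2$param_set$values$M = 50   # leaves `learner` untouched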

Examples

# Define the Learner
learner = lrn("classif.logistic")
print(learner)
#> 
#> ── <LearnerClassifLogistic> (classif.logistic): Multinomial Logistic Regression 
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3 and RWeka
#> • Predict Types: [response] and prob
#> • Feature Types: logical, integer, numeric, factor, and ordered
#> • Encapsulation: none (fallback: -)
#> • Properties: marshal, missings, multiclass, and twoclass
#> • Other settings: use_weights = 'error'

# Define a Task
task = tsk("sonar")

# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> Logistic Regression with ridge parameter of 1.0E-8
#> Coefficients...
#>                               Class
#> Variable                          M
#> ===================================
#> V1                         181.8665
#> V10                       -226.4479
#> V11                         69.8271
#> V12                          260.49
#> V13                       -157.3594
#> V14                        -33.7772
#> V15                        156.4698
#> V16                       -107.2818
#> V17                       -222.7861
#> V18                        169.4373
#> V19                        161.8917
#> V2                         459.6218
#> V20                        -87.8275
#> V21                        -69.5504
#> V22                        119.7858
#> V23                        -64.8941
#> V24                        125.6743
#> V25                        -38.0304
#> V26                        -42.0086
#> V27                         36.6926
#> V28                         -5.3925
#> V29                        -34.0563
#> V3                        -784.9055
#> V30                        118.0193
#> V31                       -240.1826
#> V32                        117.1008
#> V33                        108.3033
#> V34                       -136.4353
#> V35                         65.5109
#> V36                         -21.835
#> V37                       -168.6289
#> V38                        203.9153
#> V39                         67.6771
#> V4                            729.3
#> V40                       -189.4909
#> V41                          23.001
#> V42                         90.8787
#> V43                        -13.5428
#> V44                        -39.3715
#> V45                       -210.6286
#> V46                        426.0363
#> V47                        -107.772
#> V48                         26.5426
#> V49                        990.8876
#> V5                        -261.2747
#> V50                      -2769.0097
#> V51                       1583.9142
#> V52                        -170.952
#> V53                       2280.1428
#> V54                       -393.7476
#> V55                      -1588.1698
#> V56                      -1316.8365
#> V57                      -2084.7618
#> V58                       3300.9142
#> V59                        -66.1127
#> V6                          134.659
#> V60                       -610.1401
#> V7                        -404.9158
#> V8                        -166.0924
#> V9                         511.7027
#> Intercept                  -90.5275
#> 
#> 
#> Odds Ratios...
#>                               Class
#> Variable                          M
#> ===================================
#> V1             9.629351338772315E78
#> V10                               0
#> V11           2.1159559918702326E30
#> V12          1.3470238712536453E113
#> V13                               0
#> V14                               0
#> V15            8.994257043789435E67
#> V16                               0
#> V17                               0
#> V18            3.851978403720033E73
#> V19           2.0355304762523264E70
#> V2            4.085056327379162E199
#> V20                               0
#> V21                               0
#> V22           1.0527533234880356E52
#> V23                               0
#> V24            3.798865312411515E54
#> V25                               0
#> V26                               0
#> V27            8.617587544449364E15
#> V28                          0.0046
#> V29                               0
#> V3                                0
#> V30           1.7993824779217496E51
#> V31                               0
#> V32           7.1820229546972485E50
#> V33            1.085250749782517E47
#> V34                               0
#> V35           2.8250473853909606E28
#> V36                               0
#> V37                               0
#> V38            3.624738547177149E88
#> V39           2.4649123955659618E29
#> V4                         Infinity
#> V40                               0
#> V41                 9754459317.9876
#> V42            2.938527783430829E39
#> V43                               0
#> V44                               0
#> V45                               0
#> V46           1.059739292907342E185
#> V47                               0
#> V48            3.367491383453031E11
#> V49                        Infinity
#> V5                                0
#> V50                               0
#> V51                        Infinity
#> V52                               0
#> V53                        Infinity
#> V54                               0
#> V55                               0
#> V56                               0
#> V57                               0
#> V58                        Infinity
#> V59                               0
#> V6             3.031538750251319E58
#> V60                               0
#> V7                                0
#> V8                                0
#> V9           1.6969263498342597E222
#> 


# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> classif.ce 
#>  0.2318841
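
predictions$score() without arguments uses the default classification measure (classification error). A minimal sketch of scoring the same predictions with a different measure:

# classification accuracy instead of the default error rate
predictions$score(msr("classif.acc"))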