Multinomial Logistic Regression model with a ridge estimator. Calls RWeka::Logistic() from RWeka.

Custom mlr3 parameters

  • output_debug_info:

    • original id: output-debug-info

  • do_not_check_capabilities:

    • original id: do-not-check-capabilities

  • num_decimal_places:

    • original id: num-decimal-places

  • batch_size:

    • original id: batch-size

  • Reason for change: the ids of these control arguments were changed because their original ids contain dashes, which are not valid in R argument names.
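The renamed parameters above are set like any other hyperparameter; mlr3 forwards them to Weka under their original dashed ids. A minimal sketch (assumes mlr3 and mlr3extralearners are installed and attached):

```r
library(mlr3)
library(mlr3extralearners)

# Use the underscore ids on the R side; they map to Weka's
# "batch-size" and "num-decimal-places" internally.
learner = lrn("classif.logistic",
  batch_size = 50,
  num_decimal_places = 4
)

# Inspect the values that were set
learner$param_set$values
```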

Dictionary

This Learner can be instantiated via lrn():

lrn("classif.logistic")

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob”

  • Feature Types: “logical”, “integer”, “numeric”, “factor”, “ordered”

  • Required Packages: mlr3, RWeka

Parameters

Id                          Type      Default  Levels        Range
subset                      untyped   -        -             -
na.action                   untyped   -        -             -
C                           logical   FALSE    TRUE, FALSE   -
R                           numeric   -        -             \((-\infty, \infty)\)
M                           integer   -1       -             \((-\infty, \infty)\)
output_debug_info           logical   FALSE    TRUE, FALSE   -
do_not_check_capabilities   logical   FALSE    TRUE, FALSE   -
num_decimal_places          integer   2        -             \([1, \infty)\)
batch_size                  integer   100      -             \([1, \infty)\)
options                     untyped   NULL     -             -

References

le Cessie, S., van Houwelingen, J. C. (1992). “Ridge Estimators in Logistic Regression.” Applied Statistics, 41(1), 191–201.

See also

Author

damirpolat

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifLogistic

Active bindings

marshaled

(logical(1))
Whether the learner has been marshaled.

Methods

Inherited methods


Method new()

Creates a new instance of this R6 class.


Method marshal()

Marshal the learner's model.

Usage

LearnerClassifLogistic$marshal(...)

Arguments

...

(any)
Additional arguments passed to mlr3::marshal_model().


Method unmarshal()

Unmarshal the learner's model.

Usage

LearnerClassifLogistic$unmarshal(...)

Arguments

...

(any)
Additional arguments passed to mlr3::unmarshal_model().
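Marshaling is needed because the fitted model is backed by a Java object (via rJava) that does not survive ordinary R serialization. A sketch of the round trip, assuming a trained `learner` as in the example below:

```r
# Convert the Java-backed model into a serializable form,
# e.g. before saving to disk or sending to a parallel worker.
learner$marshal()
learner$marshaled   # TRUE while the model is in marshaled form

path = tempfile(fileext = ".rds")
saveRDS(learner, path)

# Restore the learner and rebuild the Java object before predicting.
learner2 = readRDS(path)
learner2$unmarshal()
```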


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifLogistic$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

# Define the Learner
learner = lrn("classif.logistic")
print(learner)
#> 
#> ── <LearnerClassifLogistic> (classif.logistic): Multinomial Logistic Regression 
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3 and RWeka
#> • Predict Types: [response] and prob
#> • Feature Types: logical, integer, numeric, factor, and ordered
#> • Encapsulation: none (fallback: -)
#> • Properties: marshal, missings, multiclass, and twoclass
#> • Other settings: use_weights = 'error'

# Define a Task
task = tsk("sonar")

# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> Logistic Regression with ridge parameter of 1.0E-8
#> Coefficients...
#>                               Class
#> Variable                          M
#> ===================================
#> V1                        1797.7367
#> V10                        -71.8675
#> V11                        401.8333
#> V12                       -123.0427
#> V13                        -26.1066
#> V14                         28.7201
#> V15                         67.4508
#> V16                        -195.177
#> V17                        -41.0808
#> V18                        123.3849
#> V19                       -160.0055
#> V2                        -910.3865
#> V20                        177.8028
#> V21                        -68.4722
#> V22                         30.6411
#> V23                          8.3241
#> V24                         48.4007
#> V25                        -43.4034
#> V26                          4.7038
#> V27                        -41.6153
#> V28                          15.944
#> V29                        -92.9037
#> V3                        -348.2202
#> V30                        225.8803
#> V31                       -287.9569
#> V32                        157.4548
#> V33                         73.2959
#> V34                       -313.0504
#> V35                         234.612
#> V36                       -112.9563
#> V37                        -84.1013
#> V38                          83.322
#> V39                        -55.3632
#> V4                         223.8742
#> V40                        -78.2139
#> V41                        166.4226
#> V42                       -147.9013
#> V43                          19.771
#> V44                         143.442
#> V45                        -30.1764
#> V46                       -158.3255
#> V47                        580.5383
#> V48                        -18.8536
#> V49                       1023.4753
#> V5                         156.0402
#> V50                      -2773.3421
#> V51                       1977.2226
#> V52                       -892.8046
#> V53                       1095.2665
#> V54                        955.3177
#> V55                       -336.0925
#> V56                       1463.0311
#> V57                         -813.71
#> V58                      -3481.3613
#> V59                       2677.4078
#> V6                          22.1718
#> V60                      -2368.0296
#> V7                         -65.7346
#> V8                        -472.5251
#> V9                         184.9916
#> Intercept                  -15.1472
#> 
#> 
#> Odds Ratios...
#>                               Class
#> Variable                          M
#> ===================================
#> V1                         Infinity
#> V10                               0
#> V11           3.265819365227805E174
#> V12                               0
#> V13                               0
#> V14           2.9715091979588486E12
#> V15           1.9656638212656323E29
#> V16                               0
#> V17                               0
#> V18            3.849427692251844E53
#> V19                               0
#> V2                                0
#> V20           1.6549398019108147E77
#> V21                               0
#> V22           2.0289986207961312E13
#> V23                       4121.9443
#> V24           1.0474895838301739E21
#> V25                               0
#> V26                        110.3605
#> V27                               0
#> V28                     8402405.406
#> V29                               0
#> V3                                0
#> V30           1.2547796516586303E98
#> V31                               0
#> V32           2.4085097802646783E68
#> V33            6.792190867971183E31
#> V34                               0
#> V35           7.774675759378915E101
#> V36                               0
#> V37                               0
#> V38            1.535602412376822E36
#> V39                               0
#> V4            1.6877888775691352E97
#> V40                               0
#> V41           1.8898303135822805E72
#> V42                               0
#> V43                  385873455.5914
#> V44           1.9772484312226949E62
#> V45                               0
#> V46                               0
#> V47          1.3322636856880672E252
#> V48                               0
#> V49                        Infinity
#> V5             5.853290879677922E67
#> V50                               0
#> V51                        Infinity
#> V52                               0
#> V53                        Infinity
#> V54                        Infinity
#> V55                               0
#> V56                        Infinity
#> V57                               0
#> V58                               0
#> V59                        Infinity
#> V6                  4257005497.9286
#> V60                               0
#> V7                                0
#> V8                                0
#> V9            2.1918996803714292E80
#> 


# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> classif.ce 
#>  0.3188406
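Since the learner also supports the “prob” predict type, the same workflow can return class probabilities instead of hard labels. A sketch, reusing the `task` and `ids` objects from the example above:

```r
# Request probability predictions instead of the default "response"
learner = lrn("classif.logistic", predict_type = "prob")
learner$train(task, row_ids = ids$train)

predictions = learner$predict(task, row_ids = ids$test)
head(predictions$prob)

# Probability predictions allow probabilistic measures such as log loss
predictions$score(msr("classif.logloss"))
```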