Multinomial Logistic Regression model with a ridge estimator. Calls RWeka::Logistic() from RWeka.

Custom mlr3 parameters

  • output_debug_info:

    • original id: output-debug-info

  • do_not_check_capabilities:

    • original id: do-not-check-capabilities

  • num_decimal_places:

    • original id: num-decimal-places

  • batch_size:

    • original id: batch-size

  • Reason for change: the original ids of these control arguments contain hyphens, which do not match the regular naming pattern expected by mlr3 parameter ids, so they were replaced with underscores.
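The renamed parameters are set with their underscore ids when constructing the learner; a minimal sketch, assuming mlr3, mlr3extralearners, and RWeka are installed:

```r
library(mlr3)
library(mlr3extralearners)

# Use the mlr3 underscore ids, not Weka's hyphenated originals
learner = lrn("classif.logistic",
  num_decimal_places = 4,  # original Weka id: num-decimal-places
  batch_size = 50          # original Weka id: batch-size
)
learner$param_set$values
```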

Dictionary

This Learner can be instantiated via lrn():

lrn("classif.logistic")

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob”

  • Feature Types: “logical”, “integer”, “numeric”, “factor”, “ordered”

  • Required Packages: mlr3, RWeka

Parameters

Id                         Type     Default  Levels       Range
subset                     untyped  -        -            -
na.action                  untyped  -        -            -
C                          logical  FALSE    TRUE, FALSE  -
R                          numeric  -        -            (-∞, ∞)
M                          integer  -1       -            (-∞, ∞)
output_debug_info          logical  FALSE    TRUE, FALSE  -
do_not_check_capabilities  logical  FALSE    TRUE, FALSE  -
num_decimal_places         integer  2        -            [1, ∞)
batch_size                 integer  100      -            [1, ∞)
options                    untyped  NULL     -            -
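The same information can be queried programmatically from the learner's parameter set; a sketch, assuming mlr3 and mlr3extralearners are available:

```r
library(mlr3)
library(mlr3extralearners)
library(data.table)

learner = lrn("classif.logistic")
# List every hyperparameter with its type and numeric range
pst = as.data.table(learner$param_set)
pst[, .(id, class, lower, upper)]
```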

References

le Cessie, S., van Houwelingen, J.C. (1992). “Ridge Estimators in Logistic Regression.” Applied Statistics, 41(1), 191-201.

Author

damirpolat

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifLogistic

Active bindings

marshaled

(logical(1))
Whether the learner has been marshaled.

Methods

Inherited methods


Method new()

Creates a new instance of this R6 class.

Usage

LearnerClassifLogistic$new()

Method marshal()

Marshal the learner's model.

Usage

LearnerClassifLogistic$marshal(...)

Arguments

...

(any)
Additional arguments passed to mlr3::marshal_model().


Method unmarshal()

Unmarshal the learner's model.

Usage

LearnerClassifLogistic$unmarshal(...)

Arguments

...

(any)
Additional arguments passed to mlr3::unmarshal_model().
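Marshaling converts the trained model into a form that survives serialization (RWeka models wrap Java object references, which do not serialize directly); this matters, for example, when sending a trained learner to a parallel worker. A sketch of the round trip, assuming a Java runtime plus mlr3, mlr3extralearners, and RWeka:

```r
library(mlr3)
library(mlr3extralearners)

learner = lrn("classif.logistic")
learner$train(tsk("sonar"))

learner$marshal()    # model is now in a serializable form
stopifnot(learner$marshaled)

learner$unmarshal()  # restore the model so it can predict again
stopifnot(!learner$marshaled)
```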


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifLogistic$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.
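A deep clone yields a fully independent copy, so its hyperparameters can be changed without affecting the original learner; a sketch, assuming mlr3 and mlr3extralearners are installed:

```r
library(mlr3)
library(mlr3extralearners)

l1 = lrn("classif.logistic")
l2 = l1$clone(deep = TRUE)

# Modifying the clone leaves the original untouched
l2$param_set$values$batch_size = 50
l1$param_set$values
```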

Examples

# Define the Learner
learner = lrn("classif.logistic")
print(learner)
#> 
#> ── <LearnerClassifLogistic> (classif.logistic): Multinomial Logistic Regression 
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3 and RWeka
#> • Predict Types: [response] and prob
#> • Feature Types: logical, integer, numeric, factor, and ordered
#> • Encapsulation: none (fallback: -)
#> • Properties: marshal, missings, multiclass, and twoclass
#> • Other settings: use_weights = 'error'

# Define a Task
task = tsk("sonar")

# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> Logistic Regression with ridge parameter of 1.0E-8
#> Coefficients...
#>                               Class
#> Variable                          M
#> ===================================
#> V1                         588.4381
#> V10                        126.4013
#> V11                         19.8476
#> V12                          5.0635
#> V13                        227.1628
#> V14                        -16.9197
#> V15                          5.8891
#> V16                         -22.266
#> V17                       -146.6318
#> V18                        152.1724
#> V19                          46.226
#> V2                         651.6435
#> V20                       -172.5285
#> V21                        160.0815
#> V22                        -53.6425
#> V23                          5.1131
#> V24                        157.3624
#> V25                       -120.4925
#> V26                        -46.3645
#> V27                        142.1278
#> V28                        -24.1988
#> V29                       -136.8608
#> V3                        -1200.115
#> V30                         253.008
#> V31                       -296.3741
#> V32                         77.8916
#> V33                        126.9658
#> V34                       -161.0387
#> V35                        231.7239
#> V36                       -159.2392
#> V37                        -88.5723
#> V38                        108.0121
#> V39                         41.2502
#> V4                         302.1348
#> V40                       -163.2394
#> V41                         75.7203
#> V42                       -124.4949
#> V43                        247.5701
#> V44                       -167.0676
#> V45                        -68.2365
#> V46                         82.3651
#> V47                        332.4647
#> V48                        464.6537
#> V49                        433.8353
#> V5                        -121.4347
#> V50                      -2504.8152
#> V51                       -470.2585
#> V52                       2580.9529
#> V53                        140.2391
#> V54                       1434.3864
#> V55                      -2859.7562
#> V56                        -296.994
#> V57                       -320.3042
#> V58                       -207.4364
#> V59                         30.8091
#> V6                         401.0877
#> V60                         85.9065
#> V7                        -506.2465
#> V8                         -144.934
#> V9                          45.8879
#> Intercept                 -121.4893
#> 
#> 
#> Odds Ratios...
#>                               Class
#> Variable                          M
#> ===================================
#> V1            3.592765890271787E255
#> V10            7.859061821250272E54
#> V11                  416580190.1173
#> V12                        158.1384
#> V13           4.5243396767926894E98
#> V14                               0
#> V15                        361.0909
#> V16                               0
#> V17                               0
#> V18           1.2235968062660462E66
#> V19           1.1904531927892612E20
#> V2           1.0119804132818015E283
#> V20                               0
#> V21            3.330682084449483E69
#> V22                               0
#> V23                        166.1897
#> V24           2.1959817873280566E68
#> V25                               0
#> V26                               0
#> V27            5.312559820219403E61
#> V28                               0
#> V29                               0
#> V3                                0
#> V30           7.585685781585216E109
#> V31                               0
#> V32             6.72802820138429E33
#> V33           1.3821356408620893E55
#> V34                               0
#> V35           4.329127119257115E100
#> V36                               0
#> V37                               0
#> V38            8.110560161442364E46
#> V39           8.2175886729920909E17
#> V4           1.6424505570778678E131
#> V40                               0
#> V41            7.672078017542866E32
#> V42                               0
#> V43            3.29844351880788E107
#> V44                               0
#> V45                               0
#> V46            5.898136329294161E35
#> V47          2.4409782298937816E144
#> V48            6.25955177311281E201
#> V49          2.5838043600756703E188
#> V5                                0
#> V50                               0
#> V51                               0
#> V52                        Infinity
#> V53            8.036875904030293E60
#> V54                        Infinity
#> V55                               0
#> V56                               0
#> V57                               0
#> V58                               0
#> V59           2.4001679881699605E13
#> V6            1.549409892787236E174
#> V60            2.035773687096036E37
#> V7                                0
#> V8                                0
#> V9             8.488893933048491E19
#> 


# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> classif.ce 
#>  0.2898551