
Multinomial Logistic Regression model with a ridge estimator. Calls RWeka::Logistic() from package RWeka.

Custom mlr3 parameters

  • output_debug_info:

    • original id: output-debug-info

  • do_not_check_capabilities:

    • original id: do-not-check-capabilities

  • num_decimal_places:

    • original id: num-decimal-places

  • batch_size:

    • original id: batch-size

  • Reason for change: the ids of these control arguments were changed because the original ids contain an irregular pattern (dashes) for mlr3 parameter names.
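For example, the renamed control arguments are set with their underscore ids via lrn(); a short sketch, assuming mlr3 and mlr3extralearners are attached and RWeka is installed:

```r
library(mlr3)
library(mlr3extralearners)

# Set the renamed arguments using the mlr3 ids, not the dashed Weka ids.
learner = lrn("classif.logistic",
  num_decimal_places = 4,  # original Weka id: num-decimal-places
  batch_size = 50          # original Weka id: batch-size
)
learner$param_set$values
```

Internally, the learner translates these back to the dashed ids that RWeka::Weka_control() expects.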

Dictionary

This Learner can be instantiated via lrn():

lrn("classif.logistic")

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob”

  • Feature Types: “logical”, “integer”, “numeric”, “factor”, “ordered”

  • Required Packages: mlr3, RWeka

Parameters

Id                         Type     Default  Levels       Range
subset                     untyped  -        -            -
na.action                  untyped  -        -            -
C                          logical  FALSE    TRUE, FALSE  -
R                          numeric  -        -            \((-\infty, \infty)\)
M                          integer  -1       -            \((-\infty, \infty)\)
output_debug_info          logical  FALSE    TRUE, FALSE  -
do_not_check_capabilities  logical  FALSE    TRUE, FALSE  -
num_decimal_places         integer  2        -            \([1, \infty)\)
batch_size                 integer  100      -            \([1, \infty)\)
options                    untyped  NULL     -            -
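In Weka's Logistic, R is the ridge penalty and M caps the number of BFGS iterations (-1 means iterate until convergence). A hedged sketch of setting these tuning-relevant hyperparameters, assuming mlr3 and mlr3extralearners are loaded:

```r
# Sketch: a stronger ridge penalty and a fixed iteration budget.
learner = lrn("classif.logistic", R = 1e-4, M = 200)

# Parameter values are stored in the learner's ParamSet.
learner$param_set$values
```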

References

le Cessie, S., van Houwelingen, J.C. (1992). “Ridge Estimators in Logistic Regression.” Applied Statistics, 41(1), 191-201.

See also

Author

damirpolat

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifLogistic

Active bindings

marshaled

(logical(1))
Whether the learner has been marshaled.

Methods

Inherited methods


Method new()

Creates a new instance of this R6 class.

Usage

LearnerClassifLogistic$new()


Method marshal()

Marshal the learner's model.

Usage

LearnerClassifLogistic$marshal(...)

Arguments

...

(any)
Additional arguments passed to mlr3::marshal_model().


Method unmarshal()

Unmarshal the learner's model.

Usage

LearnerClassifLogistic$unmarshal(...)

Arguments

...

(any)
Additional arguments passed to mlr3::unmarshal_model().
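Marshaling converts the fitted Weka model (a Java object reference) into a serializable form, which is needed before saving the learner to disk or sending it to a parallel worker. A minimal sketch, assuming a trained learner:

```r
# Sketch: round-trip the model through its marshaled form.
learner = lrn("classif.logistic")
learner$train(tsk("sonar"))

learner$marshal()    # model becomes serializable
learner$marshaled    # should report TRUE
learner$unmarshal()  # restore the live Weka model
learner$marshaled    # should report FALSE
```

Frameworks such as future-based parallelization in mlr3 handle this automatically; manual calls are mainly useful with saveRDS().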


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifLogistic$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

# Define the Learner
learner = lrn("classif.logistic")
print(learner)
#> 
#> ── <LearnerClassifLogistic> (classif.logistic): Multinomial Logistic Regression 
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3 and RWeka
#> • Predict Types: [response] and prob
#> • Feature Types: logical, integer, numeric, factor, and ordered
#> • Encapsulation: none (fallback: -)
#> • Properties: marshal, missings, multiclass, and twoclass
#> • Other settings: use_weights = 'error'

# Define a Task
task = tsk("sonar")

# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> Logistic Regression with ridge parameter of 1.0E-8
#> Coefficients...
#>                               Class
#> Variable                          M
#> ===================================
#> V1                         319.3332
#> V10                        -74.7087
#> V11                         207.929
#> V12                         79.2071
#> V13                         -98.872
#> V14                        -49.4444
#> V15                        104.2045
#> V16                       -134.3195
#> V17                        -22.5252
#> V18                       -105.6029
#> V19                        262.7829
#> V2                         434.8201
#> V20                         -82.898
#> V21                       -108.8323
#> V22                        116.2934
#> V23                        -49.2144
#> V24                        111.7183
#> V25                         11.0222
#> V26                        -89.6122
#> V27                        -13.4452
#> V28                         54.1401
#> V29                         -5.7445
#> V3                          -724.85
#> V30                         42.1274
#> V31                       -259.5857
#> V32                         146.563
#> V33                         -21.939
#> V34                         -0.0843
#> V35                         -8.6187
#> V36                       -102.0589
#> V37                        -74.9428
#> V38                         14.2864
#> V39                         72.4407
#> V4                         -44.9377
#> V40                        -80.1445
#> V41                         27.8329
#> V42                        -96.7029
#> V43                         39.5347
#> V44                         115.931
#> V45                         20.8368
#> V46                        329.8913
#> V47                       -507.5922
#> V48                        716.0669
#> V49                        317.1354
#> V5                        -121.9496
#> V50                      -1701.3264
#> V51                         60.9601
#> V52                       -405.7568
#> V53                       1850.3646
#> V54                         16.9405
#> V55                      -1056.6507
#> V56                        678.4459
#> V57                        -989.903
#> V58                        -90.8593
#> V59                        1454.156
#> V6                          70.4965
#> V60                        499.2775
#> V7                         -207.886
#> V8                        -243.5832
#> V9                         363.6627
#> Intercept                   13.0057
#> 
#> 
#> Odds Ratios...
#>                               Class
#> Variable                          M
#> ===================================
#> V1           4.8378647348710285E138
#> V10                               0
#> V11            2.006365229214723E90
#> V12            2.507222241308144E34
#> V13                               0
#> V14                               0
#> V15           1.8007428013770715E45
#> V16                               0
#> V17                               0
#> V18                               0
#> V19          1.3340163006094176E114
#> V2            6.918053496642291E188
#> V20                               0
#> V21                               0
#> V22            3.203043067049765E50
#> V23                               0
#> V24            3.301016736558894E48
#> V25                      61218.9628
#> V26                               0
#> V27                               0
#> V28           3.2565876009381434E23
#> V29                          0.0032
#> V3                                0
#> V30          1.97561771432147507E18
#> V31                               0
#> V32            4.482509976329836E63
#> V33                               0
#> V34                          0.9192
#> V35                          0.0002
#> V36                               0
#> V37                               0
#> V38                    1601416.1531
#> V39           2.8880376752525746E31
#> V4                                0
#> V40                               0
#> V41           1.2237293113067979E12
#> V42                               0
#> V43          1.47810542466908672E17
#> V44           2.2293191220391924E50
#> V45                 1120246571.0677
#> V46          1.8620445867483586E143
#> V47                               0
#> V48                        Infinity
#> V49           5.372178118021032E137
#> V5                                0
#> V50                               0
#> V51           2.9829314162008697E26
#> V52                               0
#> V53                        Infinity
#> V54                   22759697.9372
#> V55                               0
#> V56             4.4190739343785E294
#> V57                               0
#> V58                               0
#> V59                        Infinity
#> V6             4.132767385508287E30
#> V60           6.815021674762182E216
#> V7                                0
#> V8                                0
#> V9             8.64393995414494E157
#> 


# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> classif.ce 
#>  0.3043478
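Since the learner supports the "prob" predict type, class probabilities can be requested instead of hard labels; a sketch continuing the example above:

```r
# Sketch: request per-class probabilities and score with log loss.
learner = lrn("classif.logistic", predict_type = "prob")
learner$train(task, row_ids = ids$train)
predictions = learner$predict(task, row_ids = ids$test)

head(predictions$prob)                     # matrix of class probabilities
predictions$score(msr("classif.logloss"))  # probability-based measure
```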