Multinomial Logistic Regression model with a ridge estimator. Calls RWeka::Logistic() from RWeka.

Custom mlr3 parameters

  • output_debug_info:

    • original id: output-debug-info

  • do_not_check_capabilities:

    • original id: do-not-check-capabilities

  • num_decimal_places:

    • original id: num-decimal-places

  • batch_size:

    • original id: batch-size

  • Reason for change: the ids of these control arguments were changed because their original Weka ids contain hyphens, an irregular pattern for mlr3 parameter ids.
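When configuring the learner, the renamed mlr3-style ids (with underscores) are used in place of Weka's hyphenated option names; a minimal sketch, assuming mlr3 and mlr3extralearners are installed:

```r
library(mlr3)
library(mlr3extralearners)

# Use the mlr3 ids (underscores), not Weka's hyphenated originals
learner = lrn("classif.logistic",
  num_decimal_places = 4,  # original Weka id: num-decimal-places
  batch_size = 50          # original Weka id: batch-size
)
```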

Dictionary

This Learner can be instantiated via lrn():

lrn("classif.logistic")

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob”

  • Feature Types: “logical”, “integer”, “numeric”, “factor”, “ordered”

  • Required Packages: mlr3, RWeka

Parameters

Id                         Type     Default  Levels       Range
subset                     untyped  -        -            -
na.action                  untyped  -        -            -
C                          logical  FALSE    TRUE, FALSE  -
R                          numeric  -        -            (-∞, ∞)
M                          integer  -1       -            (-∞, ∞)
output_debug_info          logical  FALSE    TRUE, FALSE  -
do_not_check_capabilities  logical  FALSE    TRUE, FALSE  -
num_decimal_places         integer  2        -            [1, ∞)
batch_size                 integer  100      -            [1, ∞)
options                    untyped  NULL     -            -
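Hyperparameters can be set at construction or afterwards through the parameter set. A short sketch, assuming (per Weka's Logistic options) that R sets the ridge value in the log-likelihood and M the maximum number of iterations (-1 meaning run until convergence):

```r
library(mlr3)
library(mlr3extralearners)

# Assumed meanings: R = ridge penalty, M = iteration cap (Weka -R / -M)
learner = lrn("classif.logistic", R = 1e-4, M = 200)

# Parameters can also be changed after construction
learner$param_set$values$M = 500
learner$param_set$values
```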

References

le Cessie, S., van Houwelingen, J.C. (1992). “Ridge Estimators in Logistic Regression.” Applied Statistics, 41(1), 191-201.

Author

damirpolat

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifLogistic

Active bindings

marshaled

(logical(1))
Whether the learner has been marshaled.

Methods

Inherited methods


Method new()

Creates a new instance of this R6 class.

Usage

LearnerClassifLogistic$new()


Method marshal()

Marshal the learner's model.

Usage

LearnerClassifLogistic$marshal(...)

Arguments

...

(any)
Additional arguments passed to mlr3::marshal_model().


Method unmarshal()

Unmarshal the learner's model.

Usage

LearnerClassifLogistic$unmarshal(...)

Arguments

...

(any)
Additional arguments passed to mlr3::unmarshal_model().
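Marshaling matters here because the trained model is backed by a Java object reference (via RWeka/rJava) that does not survive ordinary R serialization; the model must be marshaled before saving and unmarshaled after loading. A sketch of a save/restore round trip:

```r
library(mlr3)
library(mlr3extralearners)

task = tsk("sonar")
learner = lrn("classif.logistic")
learner$train(task)

# Convert the Java-backed model into a serializable form before saving
learner$marshal()
path = tempfile(fileext = ".rds")
saveRDS(learner, path)

# Restore the learner and unmarshal before predicting again
learner2 = readRDS(path)
learner2$unmarshal()
learner2$predict(task)
```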


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifLogistic$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

# Define the Learner
learner = lrn("classif.logistic")
print(learner)
#> 
#> ── <LearnerClassifLogistic> (classif.logistic): Multinomial Logistic Regression 
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3 and RWeka
#> • Predict Types: [response] and prob
#> • Feature Types: logical, integer, numeric, factor, and ordered
#> • Encapsulation: none (fallback: -)
#> • Properties: marshal, missings, multiclass, and twoclass
#> • Other settings: use_weights = 'error'

# Define a Task
task = tsk("sonar")

# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> Logistic Regression with ridge parameter of 1.0E-8
#> Coefficients...
#>                               Class
#> Variable                          M
#> ===================================
#> V1                         901.1824
#> V10                        -162.995
#> V11                         83.5096
#> V12                        166.9887
#> V13                       -124.4179
#> V14                         71.2766
#> V15                        -94.0409
#> V16                        -55.0332
#> V17                         -7.9525
#> V18                        -33.0941
#> V19                        168.7483
#> V2                        -550.9161
#> V20                        -16.2135
#> V21                       -160.8317
#> V22                        110.9198
#> V23                         34.3077
#> V24                         80.1052
#> V25                         -86.281
#> V26                         24.3204
#> V27                         87.6375
#> V28                        -37.8438
#> V29                         -93.333
#> V3                         115.4549
#> V30                        174.4364
#> V31                       -359.7523
#> V32                        193.6794
#> V33                         98.5796
#> V34                        -70.5737
#> V35                        -21.5034
#> V36                         61.0176
#> V37                       -267.9315
#> V38                        125.8442
#> V39                        135.4389
#> V4                         408.2966
#> V40                         -99.168
#> V41                         -23.875
#> V42                        -21.8435
#> V43                        195.1498
#> V44                       -315.9099
#> V45                         90.4076
#> V46                        202.0263
#> V47                        228.7805
#> V48                         11.8106
#> V49                        945.8084
#> V5                         177.2177
#> V50                      -3435.6314
#> V51                       2000.2659
#> V52                       1214.6149
#> V53                        128.9777
#> V54                        125.1478
#> V55                       -657.4418
#> V56                        830.2534
#> V57                        150.8735
#> V58                      -1402.0417
#> V59                        467.4052
#> V6                         185.1458
#> V60                        133.7002
#> V7                         -63.4984
#> V8                          -249.45
#> V9                         470.5011
#> Intercept                  -142.244
#> 
#> 
#> Odds Ratios...
#>                               Class
#> Variable                          M
#> ===================================
#> V1                         Infinity
#> V10                               0
#> V11           1.8524621145434497E36
#> V12           3.3286624920175014E72
#> V13                               0
#> V14            9.016619752928251E30
#> V15                               0
#> V16                               0
#> V17                          0.0004
#> V18                               0
#> V19           1.9339286628524428E73
#> V2                                0
#> V20                               0
#> V21                               0
#> V22             1.48547045633874E48
#> V23            7.936634636466056E14
#> V24            6.155091140250947E34
#> V25                               0
#> V26            3.649270841099657E10
#> V27            1.149444436762069E38
#> V28                               0
#> V29                               0
#> V3            1.3849517738125687E50
#> V30           5.7115176099081604E75
#> V31                               0
#> V32            1.299840002684068E84
#> V33            6.495037630609116E42
#> V34                               0
#> V35                               0
#> V36           3.1593635566283634E26
#> V37                               0
#> V38           4.5022615293913675E54
#> V39            6.612593928340354E58
#> V4           2.0938635053567403E177
#> V40                               0
#> V41                               0
#> V42                               0
#> V43            5.655496180545953E84
#> V44                               0
#> V45           1.8345435254840942E39
#> V46            5.481675045413893E87
#> V47           2.2808526415610437E99
#> V48                      134666.945
#> V49                        Infinity
#> V5             9.218956399732878E76
#> V50                               0
#> V51                        Infinity
#> V52                        Infinity
#> V53           1.0334831174352236E56
#> V54           2.2439708069062713E54
#> V55                               0
#> V56                        Infinity
#> V57           3.3382746950768042E65
#> V58                               0
#> V59           9.806185481240175E202
#> V6            2.5574622415631266E80
#> V60           1.1621281850386296E58
#> V7                                0
#> V8                                0
#> V9           2.1679236360677775E204
#> 


# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> classif.ce 
#>  0.2608696
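Since the learner supports the "prob" predict type, class probabilities can be requested instead of hard labels; a short sketch continuing the example above:

```r
# Request probability predictions
learner = lrn("classif.logistic", predict_type = "prob")
learner$train(task, row_ids = ids$train)
predictions = learner$predict(task, row_ids = ids$test)

# Per-class probability matrix for the test rows
head(predictions$prob)
```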