Multinomial Logistic Regression model with a ridge estimator. Calls RWeka::Logistic() from RWeka.

Custom mlr3 parameters

  • output_debug_info:

    • original id: output-debug-info

  • do_not_check_capabilities:

    • original id: do-not-check-capabilities

  • num_decimal_places:

    • original id: num-decimal-places

  • batch_size:

    • original id: batch-size

  • Reason for change: the ids of these control arguments were changed because their original ids contain an irregular pattern (hyphens), which is not valid for mlr3 parameter ids.
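The renamed control arguments above are therefore set with their underscore ids, not the original hyphenated Weka ids. A minimal sketch (assuming mlr3 and mlr3extralearners are loaded):

```r
library(mlr3)
library(mlr3extralearners)

# Use the mlr3 ids (e.g. batch_size), not the Weka ids (batch-size).
learner = lrn("classif.logistic",
  num_decimal_places = 4,  # original Weka id: num-decimal-places
  batch_size = 50          # original Weka id: batch-size
)
learner$param_set$values
```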

Dictionary

This Learner can be instantiated via lrn():

lrn("classif.logistic")

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob”

  • Feature Types: “logical”, “integer”, “numeric”, “factor”, “ordered”

  • Required Packages: mlr3, RWeka

Parameters

Id                         Type     Default  Levels       Range
subset                     untyped  -        -            -
na.action                  untyped  -        -            -
C                          logical  FALSE    TRUE, FALSE  -
R                          numeric  -        -            \((-\infty, \infty)\)
M                          integer  -1       -            \((-\infty, \infty)\)
output_debug_info          logical  FALSE    TRUE, FALSE  -
do_not_check_capabilities  logical  FALSE    TRUE, FALSE  -
num_decimal_places         integer  2        -            \([1, \infty)\)
batch_size                 integer  100      -            \([1, \infty)\)
options                    untyped  NULL     -            -
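As a sketch of how the main tuning parameters are passed through to RWeka::Logistic(): in Weka's Logistic, R controls the ridge penalty and M the maximum number of iterations (where -1, the default, means iterate until convergence). Assuming mlr3 and mlr3extralearners are loaded:

```r
library(mlr3)
library(mlr3extralearners)

# Increase the ridge penalty and cap the number of iterations.
# R maps to Weka's -R (ridge) and M to Weka's -M (max iterations).
learner = lrn("classif.logistic", R = 1e-4, M = 200)
learner$param_set$values
```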

References

le Cessie, S., van Houwelingen, J.C. (1992). “Ridge Estimators in Logistic Regression.” Applied Statistics, 41(1), 191-201.

See also

Author

damirpolat

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifLogistic

Active bindings

marshaled

(logical(1))
Whether the learner has been marshaled.

Methods

Inherited methods


Method new()

Creates a new instance of this R6 class.


Method marshal()

Marshal the learner's model.

Usage

LearnerClassifLogistic$marshal(...)

Arguments

...

(any)
Additional arguments passed to mlr3::marshal_model().


Method unmarshal()

Unmarshal the learner's model.

Usage

LearnerClassifLogistic$unmarshal(...)

Arguments

...

(any)
Additional arguments passed to mlr3::unmarshal_model().
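
A marshal/unmarshal round trip typically looks like the sketch below. Marshaling is needed because the fitted model wraps external (Java) state that does not survive plain R serialization; marshal before saveRDS() or before sending the learner to a parallel worker, and unmarshal afterwards. This assumes mlr3 and mlr3extralearners are loaded:

```r
library(mlr3)
library(mlr3extralearners)

task = tsk("sonar")
learner = lrn("classif.logistic")
learner$train(task)

learner$marshal()    # convert the model to a serializable form
learner$marshaled    # TRUE while marshaled
learner$unmarshal()  # restore the usable model
learner$marshaled    # FALSE again
```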


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifLogistic$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

# Define the Learner
learner = lrn("classif.logistic")
print(learner)
#> 
#> ── <LearnerClassifLogistic> (classif.logistic): Multinomial Logistic Regression 
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3 and RWeka
#> • Predict Types: [response] and prob
#> • Feature Types: logical, integer, numeric, factor, and ordered
#> • Encapsulation: none (fallback: -)
#> • Properties: marshal, missings, multiclass, and twoclass
#> • Other settings: use_weights = 'error'

# Define a Task
task = tsk("sonar")

# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> Logistic Regression with ridge parameter of 1.0E-8
#> Coefficients...
#>                               Class
#> Variable                          M
#> ===================================
#> V1                         430.4188
#> V10                        -207.542
#> V11                        -36.8244
#> V12                        187.6466
#> V13                        160.2577
#> V14                       -177.5421
#> V15                         81.1582
#> V16                         52.6178
#> V17                        -98.9765
#> V18                         16.0201
#> V19                        -52.0648
#> V2                         706.9684
#> V20                         12.3342
#> V21                         -2.9369
#> V22                        -13.3171
#> V23                         17.8848
#> V24                        153.1298
#> V25                        -19.3024
#> V26                          6.6506
#> V27                        -75.7406
#> V28                         48.2165
#> V29                        -51.2397
#> V3                       -1283.8264
#> V30                        260.1721
#> V31                       -239.1094
#> V32                         30.9125
#> V33                          14.746
#> V34                        -84.9064
#> V35                          97.565
#> V36                       -118.9496
#> V37                         -3.3772
#> V38                        -19.8678
#> V39                         33.6511
#> V4                         738.1223
#> V40                        -14.7007
#> V41                         26.3673
#> V42                       -114.5162
#> V43                         -0.8025
#> V44                         50.1757
#> V45                         32.1695
#> V46                         17.7075
#> V47                        156.2032
#> V48                       -258.0732
#> V49                       1378.3759
#> V5                         224.0998
#> V50                       -933.4831
#> V51                        138.8885
#> V52                        525.6492
#> V53                         37.3736
#> V54                       2149.7983
#> V55                      -4704.1855
#> V56                       -781.8598
#> V57                      -1202.5592
#> V58                       3948.0693
#> V59                      -2515.7884
#> V6                          76.4557
#> V60                        3474.872
#> V7                        -262.9931
#> V8                        -609.8633
#> V9                         560.5168
#> Intercept                 -104.4264
#> 
#> 
#> Odds Ratios...
#>                               Class
#> Variable                          M
#> ===================================
#> V1             8.48249739307589E186
#> V10                               0
#> V11                               0
#> V12           3.1181244581324017E81
#> V13            3.972264255957133E69
#> V14                               0
#> V15           1.7643214425479857E35
#> V16            7.105725364849832E22
#> V17                               0
#> V18                    9066718.9604
#> V19                               0
#> V2            1.077645767014419E307
#> V20                     227341.8576
#> V21                           0.053
#> V22                               0
#> V23                   58515685.1991
#> V24           3.1872990593817396E66
#> V25                               0
#> V26                        773.2405
#> V27                               0
#> V28             8.71319305398368E20
#> V29                               0
#> V3                                0
#> V30           9.802010113114669E112
#> V31                               0
#> V32            2.661620976288291E13
#> V33                    2535730.6696
#> V34                               0
#> V35           2.3547628455832208E42
#> V36                               0
#> V37                          0.0341
#> V38                               0
#> V39            4.116206343228763E14
#> V4                         Infinity
#> V40                               0
#> V41            2.825982499564386E11
#> V42                               0
#> V43                          0.4482
#> V44            6.180315369603796E21
#> V45             9.35458102811985E13
#> V46                   49008380.4038
#> V47            6.889834806731025E67
#> V48                               0
#> V49                        Infinity
#> V5            2.1149010671262717E97
#> V50                               0
#> V51           2.0822066460209816E60
#> V52          1.9343519389798554E228
#> V53           1.7027042944075582E16
#> V54                        Infinity
#> V55                               0
#> V56                               0
#> V57                               0
#> V58                        Infinity
#> V59                               0
#> V6             1.600659370097161E33
#> V60                        Infinity
#> V7                                0
#> V8                                0
#> V9           2.6875090360225555E243
#> 


# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> classif.ce 
#>  0.3043478
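
Since the learner also supports the "prob" predict type (see Meta Information above), the example can be extended to probability predictions; a hedged continuation of the same session:

```r
# Switch to probability predictions and re-predict on the test rows.
learner$predict_type = "prob"
predictions = learner$predict(task, row_ids = ids$test)
head(predictions$prob)
```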