Multinomial Logistic Regression model with a ridge estimator. Calls RWeka::Logistic() from RWeka.

Custom mlr3 parameters

  • output_debug_info:

    • original id: output-debug-info

  • do_not_check_capabilities:

    • original id: do-not-check-capabilities

  • num_decimal_places:

    • original id: num-decimal-places

  • batch_size:

    • original id: batch-size

  • Reason for change: the ids of these control arguments were renamed because their original ids contain a dash (`-`), which is not a valid character in mlr3 parameter ids.
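The renamed parameters are set with their mlr3-side ids when constructing the learner. A minimal sketch (assumes mlr3 and mlr3extralearners are installed and that RWeka's Java dependency is available):

```r
library(mlr3)
library(mlr3extralearners)

# Use the mlr3 ids (underscores), not Weka's original dashed ids
learner = lrn("classif.logistic",
  batch_size = 50,          # original Weka id: batch-size
  num_decimal_places = 4    # original Weka id: num-decimal-places
)
learner$param_set$values
```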

Dictionary

This Learner can be instantiated via lrn():

lrn("classif.logistic")

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob”

  • Feature Types: “logical”, “integer”, “numeric”, “factor”, “ordered”

  • Required Packages: mlr3, RWeka

Parameters

Id                         Type     Default  Levels       Range
subset                     untyped  -                     -
na.action                  untyped  -                     -
C                          logical  FALSE    TRUE, FALSE  -
R                          numeric  -                     \((-\infty, \infty)\)
M                          integer  -1                    \((-\infty, \infty)\)
output_debug_info          logical  FALSE    TRUE, FALSE  -
do_not_check_capabilities  logical  FALSE    TRUE, FALSE  -
num_decimal_places         integer  2                     \([1, \infty)\)
batch_size                 integer  100                   \([1, \infty)\)
options                    untyped  NULL                  -
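The single-letter ids come straight from Weka's control flags: as we read RWeka::Logistic()'s options, R is the ridge penalty, M caps the number of iterations (-1 means run to convergence), and C toggles conjugate gradient descent. A hedged configuration sketch:

```r
library(mlr3)
library(mlr3extralearners)

# R = ridge penalty, M = iteration cap (our reading of the Weka flags)
learner = lrn("classif.logistic", R = 1e-4, M = 200)
learner$param_set$values
```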

References

le Cessie, S., van Houwelingen, J.C. (1992). “Ridge Estimators in Logistic Regression.” Applied Statistics, 41(1), 191-201.
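The reference above fits a ridge-penalized logistic model; in the two-class case the coefficients maximize a penalized log-likelihood of roughly the following form (our paraphrase of the standard formulation, with \(\lambda\) playing the role of the learner's R parameter):

$$\ell_\lambda(\beta) = \sum_{i=1}^{n}\Big[\, y_i \log p_i + (1 - y_i)\log(1 - p_i) \,\Big] - \lambda \lVert \beta \rVert^2, \qquad p_i = \frac{1}{1 + e^{-x_i^\top \beta}}$$

The penalty \(\lambda \lVert \beta \rVert^2\) shrinks the coefficients toward zero, which stabilizes the fit when features are highly correlated, as they are in the sonar example below.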

Author

damirpolat

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifLogistic

Active bindings

marshaled

(logical(1))
Whether the learner has been marshaled.

Methods

Method new()

Creates a new instance of this R6 class.


Method marshal()

Marshal the learner's model.

Usage

LearnerClassifLogistic$marshal(...)

Arguments

...

(any)
Additional arguments passed to mlr3::marshal_model().


Method unmarshal()

Unmarshal the learner's model.

Usage

LearnerClassifLogistic$unmarshal(...)

Arguments

...

(any)
Additional arguments passed to mlr3::unmarshal_model().
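Because the fitted model wraps a Java object reference, it cannot survive plain serialization (e.g. saveRDS() or sending to a worker process); marshaling converts it to a serializable form first. A usage sketch (assumes RWeka's Java backend is available; the task choice is ours):

```r
library(mlr3)
library(mlr3extralearners)

learner = lrn("classif.logistic")
learner$train(tsk("sonar"))

learner$marshal()     # convert the Java-backed model to a serializable form
learner$marshaled     # TRUE while the model is marshaled
learner$unmarshal()   # restore the model before predicting again

pred = learner$predict(tsk("sonar"))
```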


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifLogistic$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

# Define the Learner
learner = lrn("classif.logistic")
print(learner)
#> 
#> ── <LearnerClassifLogistic> (classif.logistic): Multinomial Logistic Regression 
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3 and RWeka
#> • Predict Types: [response] and prob
#> • Feature Types: logical, integer, numeric, factor, and ordered
#> • Encapsulation: none (fallback: -)
#> • Properties: marshal, missings, multiclass, and twoclass
#> • Other settings: use_weights = 'error'

# Define a Task
task = tsk("sonar")

# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> Logistic Regression with ridge parameter of 1.0E-8
#> Coefficients...
#>                               Class
#> Variable                          M
#> ===================================
#> V1                        -210.6102
#> V10                        158.2764
#> V11                         -11.989
#> V12                         101.081
#> V13                         63.6226
#> V14                        -71.7036
#> V15                        -14.9589
#> V16                        -42.3016
#> V17                        -23.2519
#> V18                         18.7591
#> V19                        -26.8799
#> V2                         464.1465
#> V20                        141.7128
#> V21                        -48.6594
#> V22                        -56.4851
#> V23                         33.3781
#> V24                         61.2441
#> V25                       -117.5506
#> V26                         38.6218
#> V27                         89.5567
#> V28                         -98.944
#> V29                        -45.1393
#> V3                       -1089.5278
#> V30                        258.2585
#> V31                       -326.1078
#> V32                         214.808
#> V33                        -33.3724
#> V34                       -112.2877
#> V35                         97.9076
#> V36                          8.0067
#> V37                       -210.5974
#> V38                        102.0396
#> V39                        146.9636
#> V4                         757.3353
#> V40                       -317.6469
#> V41                        176.5604
#> V42                       -137.7848
#> V43                         88.2309
#> V44                         42.7256
#> V45                        -12.4694
#> V46                         53.1604
#> V47                        108.5637
#> V48                        569.1482
#> V49                        -45.1769
#> V5                         -37.3986
#> V50                      -1955.8607
#> V51                       1104.9465
#> V52                       -557.0297
#> V53                        255.2771
#> V54                       1114.0137
#> V55                      -2115.1632
#> V56                       -833.9414
#> V57                        287.8531
#> V58                       1309.2136
#> V59                       1821.2783
#> V6                         292.7636
#> V60                       -735.1295
#> V7                        -416.3772
#> V8                         229.7717
#> V9                         -143.799
#> Intercept                  -52.9942
#> 
#> 
#> Odds Ratios...
#>                               Class
#> Variable                          M
#> ===================================
#> V1                                0
#> V10            5.477384208030242E68
#> V11                               0
#> V12            7.923705079527661E43
#> V13            4.274951600964251E27
#> V14                               0
#> V15                               0
#> V16                               0
#> V17                               0
#> V18                  140276813.4369
#> V19                               0
#> V2           3.7691498170013705E201
#> V20           3.5080495685716924E61
#> V21                               0
#> V22                               0
#> V23           3.1328764055176394E14
#> V24            3.962392741810113E26
#> V25                               0
#> V26           5.9324909646800928E16
#> V27            7.834178668215762E38
#> V28                               0
#> V29                               0
#> V3                                0
#> V30          1.4462679477285923E112
#> V31                               0
#> V32           1.9494327507007613E93
#> V33                               0
#> V34                               0
#> V35            3.316979053120247E42
#> V36                       3000.9158
#> V37                               0
#> V38           2.0665629171524647E44
#> V39            6.690860827375204E63
#> V4                         Infinity
#> V40                               0
#> V41           4.7776645244952674E76
#> V42                               0
#> V43            2.080520367924673E38
#> V44           3.5931238155224361E18
#> V45                               0
#> V46           1.2225909081052067E23
#> V47           1.4080232422881551E47
#> V48          1.5063618603323377E247
#> V49                               0
#> V5                                0
#> V50                               0
#> V51                        Infinity
#> V52                               0
#> V53           7.335779965746066E110
#> V54                        Infinity
#> V55                               0
#> V56                               0
#> V57          1.0303726205994597E125
#> V58                        Infinity
#> V59                        Infinity
#> V6           1.3983211914472408E127
#> V60                               0
#> V7                                0
#> V8             6.145714955032225E99
#> V9                                0
#> 


# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> classif.ce 
#>  0.2753623
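A single train/test split such as the one above can be noisy on a 208-row task; the same learner drops directly into mlr3's resampling for a steadier estimate. A sketch (the 3-fold CV and the measure are our choices, not part of the learner):

```r
library(mlr3)
library(mlr3extralearners)

# 3-fold cross-validation of the same learner on the sonar task
rr = resample(tsk("sonar"), lrn("classif.logistic"), rsmp("cv", folds = 3))
rr$aggregate(msr("classif.ce"))  # mean misclassification error across folds
```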