Multinomial Logistic Regression model with a ridge estimator. Calls RWeka::Logistic() from RWeka.

Custom mlr3 parameters

  • output_debug_info:

    • original id: output-debug-info

  • do_not_check_capabilities:

    • original id: do-not-check-capabilities

  • num_decimal_places:

    • original id: num-decimal-places

  • batch_size:

    • original id: batch-size

  • Reason for change: the ids of these control arguments were renamed because their original ids contain an irregular pattern (dashes), which is not allowed in mlr3 parameter ids.
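As a brief sketch of what the renaming means in practice, the underscore ids are the ones used on the mlr3 side when setting hyperparameters (the dashed names remain internal to RWeka):

```r
library(mlr3)
library(mlr3extralearners)

# use the renamed (underscore) ids, not the original dashed RWeka ids
learner = lrn("classif.logistic",
  num_decimal_places = 4,  # original id: num-decimal-places
  batch_size = 50          # original id: batch-size
)
```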

Dictionary

This Learner can be instantiated via lrn():

lrn("classif.logistic")

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob”

  • Feature Types: “logical”, “integer”, “numeric”, “factor”, “ordered”

  • Required Packages: mlr3, RWeka

Parameters

| Id                        | Type    | Default | Levels      | Range                 |
|---------------------------|---------|---------|-------------|-----------------------|
| subset                    | untyped | -       |             | -                     |
| na.action                 | untyped | -       |             | -                     |
| C                         | logical | FALSE   | TRUE, FALSE | -                     |
| R                         | numeric | -       |             | \((-\infty, \infty)\) |
| M                         | integer | -1      |             | \((-\infty, \infty)\) |
| output_debug_info         | logical | FALSE   | TRUE, FALSE | -                     |
| do_not_check_capabilities | logical | FALSE   | TRUE, FALSE | -                     |
| num_decimal_places        | integer | 2       |             | \([1, \infty)\)       |
| batch_size                | integer | 100     |             | \([1, \infty)\)       |
| options                   | untyped | NULL    |             | -                     |
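A minimal sketch of adjusting the two Weka-native tuning knobs, assuming the usual RWeka semantics (R is the ridge penalty; M caps the number of optimizer iterations, with -1 meaning run until convergence):

```r
library(mlr3)
library(mlr3extralearners)

learner = lrn("classif.logistic")
# set the ridge penalty and an iteration cap before training
learner$param_set$set_values(R = 1e-4, M = 100)
```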

References

le Cessie, S., van Houwelingen, J.C. (1992). “Ridge Estimators in Logistic Regression.” Applied Statistics, 41(1), 191-201.

Author

damirpolat

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifLogistic

Methods

Method new()

Creates a new instance of this R6 class.

Usage

LearnerClassifLogistic$new()


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifLogistic$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.
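A short sketch of why `deep = TRUE` matters: a deep clone also copies mutable fields such as the parameter set, so changes to the copy do not leak back into the original learner.

```r
learner = mlr3::lrn("classif.logistic")

# deep = TRUE makes the copy fully independent of the original
learner2 = learner$clone(deep = TRUE)
learner2$param_set$set_values(batch_size = 50)
# learner$param_set is unaffected by the change above
```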

Examples

# Define the Learner
learner = mlr3::lrn("classif.logistic")
print(learner)
#> 
#> ── <LearnerClassifLogistic> (classif.logistic): Multinomial Logistic Regression 
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3 and RWeka
#> • Predict Types: [response] and prob
#> • Feature Types: logical, integer, numeric, factor, and ordered
#> • Encapsulation: none (fallback: -)
#> • Properties: missings, multiclass, and twoclass
#> • Other settings: use_weights = 'error'

# Define a Task
task = mlr3::tsk("sonar")

# Create train and test set
ids = mlr3::partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> Logistic Regression with ridge parameter of 1.0E-8
#> Coefficients...
#>                               Class
#> Variable                          M
#> ===================================
#> V1                         727.3241
#> V10                       -205.0089
#> V11                        287.3926
#> V12                       -125.9558
#> V13                        190.7647
#> V14                        -32.7239
#> V15                        -19.2913
#> V16                        -83.5939
#> V17                        -66.8009
#> V18                        108.7629
#> V19                         43.5547
#> V2                         960.3945
#> V20                       -105.4852
#> V21                         35.2961
#> V22                          9.6406
#> V23                         54.0488
#> V24                        137.1211
#> V25                       -198.6339
#> V26                        -42.7922
#> V27                         98.8662
#> V28                         40.3219
#> V29                        -62.6193
#> V3                       -1703.7003
#> V30                         76.7544
#> V31                       -312.3554
#> V32                        227.1296
#> V33                        -60.3367
#> V34                         57.0698
#> V35                         43.2203
#> V36                       -200.8332
#> V37                         84.3662
#> V38                         65.2132
#> V39                        -44.6404
#> V4                         531.1252
#> V40                       -226.6617
#> V41                        210.7182
#> V42                        -29.9269
#> V43                        262.0964
#> V44                       -226.8734
#> V45                        -35.2664
#> V46                        153.8406
#> V47                        361.7336
#> V48                       -185.5599
#> V49                        865.2278
#> V5                        -251.5659
#> V50                      -2607.2632
#> V51                        878.7165
#> V52                       1889.4755
#> V53                        324.8998
#> V54                         81.2787
#> V55                       -279.1898
#> V56                         940.819
#> V57                       1978.0214
#> V58                       -589.2651
#> V59                       -155.9757
#> V6                         238.1316
#> V60                       -672.4713
#> V7                         -171.035
#> V8                        -211.1843
#> V9                         227.4977
#> Intercept                  -67.2787
#> 
#> 
#> Odds Ratios...
#>                               Class
#> Variable                          M
#> ===================================
#> V1                         Infinity
#> V10                               0
#> V11            6.50154573643597E124
#> V12                               0
#> V13            7.047582555441164E82
#> V14                               0
#> V15                               0
#> V16                               0
#> V17                               0
#> V18           1.7184175896978863E47
#> V19           8.2331543531672074E18
#> V2                         Infinity
#> V20                               0
#> V21            2.132504136444066E15
#> V22                      15376.7148
#> V23           2.9722099233608443E23
#> V24           3.5558170540946657E59
#> V25                               0
#> V26                               0
#> V27            8.650704075271349E42
#> V28           3.2478779834819859E17
#> V29                               0
#> V3                                0
#> V30           2.1577957394348164E33
#> V31                               0
#> V32           4.3767065328819796E98
#> V33                               0
#> V34            6.096655533479562E24
#> V35           5.8932417129522166E18
#> V36                               0
#> V37           4.3627370021969334E36
#> V38           2.0975898550055196E28
#> V39                               0
#> V4             4.62124629133135E230
#> V40                               0
#> V41            3.263939493052663E91
#> V42                               0
#> V43            6.71492082382858E113
#> V44                               0
#> V45                               0
#> V46           6.4880229989774545E66
#> V47           1.255746612710446E157
#> V48                               0
#> V49                        Infinity
#> V5                                0
#> V50                               0
#> V51                        Infinity
#> V52                        Infinity
#> V53          1.2652433213935182E141
#> V54           1.9900914190378953E35
#> V55                               0
#> V56                        Infinity
#> V57                        Infinity
#> V58                               0
#> V59                               0
#> V6           2.6255508369794266E103
#> V60                               0
#> V7                                0
#> V8                                0
#> V9             6.324228724744786E98
#> 


# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> classif.ce 
#>  0.3188406
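Since the learner also supports the "prob" predict type, a hedged follow-up to the example above could request class probabilities and score with a different measure (here accuracy, via `msr("classif.acc")`):

```r
# switch to probability predictions (learner and task from the example above)
learner$predict_type = "prob"
predictions = learner$predict(task, row_ids = ids$test)

# score with accuracy instead of the default classification error
predictions$score(mlr3::msr("classif.acc"))
```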