Multinomial Logistic Regression model with a ridge estimator. Calls RWeka::Logistic() from package RWeka.

Custom mlr3 parameters

  • output_debug_info:

    • original id: output-debug-info

  • do_not_check_capabilities:

    • original id: do-not-check-capabilities

  • num_decimal_places:

    • original id: num-decimal-places

  • batch_size:

    • original id: batch-size

  • Reason for change: the ids of these control arguments were changed because the original ids contain dashes, which are not valid in mlr3 parameter ids.
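As a short sketch, the renamed parameters are set with their mlr3 ids at construction time (the values below are illustrative, not recommendations):

```r
library(mlr3)
library(mlr3extralearners)

# The mlr3 ids map to RWeka's hyphenated control arguments,
# e.g. num_decimal_places corresponds to num-decimal-places.
learner = lrn("classif.logistic",
  num_decimal_places = 4,  # illustrative value
  batch_size = 50          # illustrative value
)
learner$param_set$values
```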

Dictionary

This Learner can be instantiated via lrn():

lrn("classif.logistic")

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob”

  • Feature Types: “logical”, “integer”, “numeric”, “factor”, “ordered”

  • Required Packages: mlr3, RWeka

Parameters

Id                          Type      Default   Levels        Range
subset                      untyped   -         -             -
na.action                   untyped   -         -             -
C                           logical   FALSE     TRUE, FALSE   -
R                           numeric   -         -             (-∞, ∞)
M                           integer   -1        -             (-∞, ∞)
output_debug_info           logical   FALSE     TRUE, FALSE   -
do_not_check_capabilities   logical   FALSE     TRUE, FALSE   -
num_decimal_places          integer   2         -             [1, ∞)
batch_size                  integer   100       -             [1, ∞)
options                     untyped   NULL      -             -
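The single-letter parameters are passed through to RWeka::Logistic(): R is Weka's ridge value (the fitted model below reports the default, 1.0E-8) and M is the maximum number of iterations, where -1 means iterate until convergence. A minimal sketch, with illustrative values:

```r
library(mlr3)
library(mlr3extralearners)

# Sketch: raise the ridge penalty and cap the iterations.
# R = ridge value (Weka default 1e-8), M = max iterations
# (-1, the default, iterates until convergence).
learner = lrn("classif.logistic", R = 1e-3, M = 200)
learner$param_set$values
```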

References

le Cessie, S., van Houwelingen, J.C. (1992). “Ridge Estimators in Logistic Regression.” Applied Statistics, 41(1), 191-201.

Author

damirpolat

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifLogistic

Active bindings

marshaled

(logical(1))
Whether the learner has been marshaled.

Methods

Inherited methods


Method new()

Creates a new instance of this R6 class.

Usage

LearnerClassifLogistic$new()


Method marshal()

Marshal the learner's model.

Usage

LearnerClassifLogistic$marshal(...)

Arguments

...

(any)
Additional arguments passed to mlr3::marshal_model().


Method unmarshal()

Unmarshal the learner's model.

Usage

LearnerClassifLogistic$unmarshal(...)

Arguments

...

(any)
Additional arguments passed to mlr3::unmarshal_model().
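
Marshaling converts the fitted model into a form that survives serialization, which matters here presumably because the underlying Weka model is a Java object reference that cannot be saved directly. A minimal sketch, assuming the built-in sonar task:

```r
library(mlr3)
library(mlr3extralearners)

learner = lrn("classif.logistic")
learner$train(tsk("sonar"))

# Marshal before saving or sending the learner to another R session ...
learner$marshal()
learner$marshaled  # TRUE while marshaled

# ... and unmarshal before using the model again.
learner$unmarshal()
```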


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifLogistic$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

# Define the Learner
learner = lrn("classif.logistic")
print(learner)
#> 
#> ── <LearnerClassifLogistic> (classif.logistic): Multinomial Logistic Regression 
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3 and RWeka
#> • Predict Types: [response] and prob
#> • Feature Types: logical, integer, numeric, factor, and ordered
#> • Encapsulation: none (fallback: -)
#> • Properties: marshal, missings, multiclass, and twoclass
#> • Other settings: use_weights = 'error'

# Define a Task
task = tsk("sonar")

# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> Logistic Regression with ridge parameter of 1.0E-8
#> Coefficients...
#>                               Class
#> Variable                          M
#> ===================================
#> V1                         145.9626
#> V10                          -9.505
#> V11                        111.2012
#> V12                          40.558
#> V13                         84.9304
#> V14                        -95.7352
#> V15                        141.8762
#> V16                        -75.1307
#> V17                        -114.909
#> V18                         63.5426
#> V19                        -58.4675
#> V2                         263.2374
#> V20                        139.2299
#> V21                        -93.6772
#> V22                         66.9925
#> V23                         -55.335
#> V24                         129.899
#> V25                         32.9743
#> V26                       -213.8962
#> V27                        168.9475
#> V28                       -129.4938
#> V29                         34.3341
#> V3                        -764.5859
#> V30                        171.6513
#> V31                       -237.9931
#> V32                        155.5932
#> V33                       -188.8022
#> V34                           3.465
#> V35                        180.8555
#> V36                       -269.3279
#> V37                           29.78
#> V38                         51.6122
#> V39                        -11.2562
#> V4                         552.9343
#> V40                        -62.1795
#> V41                        118.9504
#> V42                       -139.0682
#> V43                        108.1323
#> V44                         17.2637
#> V45                         97.2016
#> V46                         76.4998
#> V47                       -251.3765
#> V48                        660.4129
#> V49                        235.1897
#> V5                        -200.4611
#> V50                      -1031.6592
#> V51                       -760.6011
#> V52                         195.475
#> V53                       1180.6475
#> V54                         908.486
#> V55                       -638.5453
#> V56                      -2430.9738
#> V57                        577.3188
#> V58                        -173.906
#> V59                       -261.7488
#> V6                         195.2641
#> V60                       1314.3438
#> V7                        -183.4768
#> V8                        -167.8563
#> V9                          27.1068
#> Intercept                  -15.4787
#> 
#> 
#> Odds Ratios...
#>                               Class
#> Variable                          M
#> ===================================
#> V1             2.458889340375313E63
#> V10                          0.0001
#> V11           1.9682864766158254E48
#> V12           4.1125189353350496E17
#> V13             7.67000735846015E36
#> V14                               0
#> V15           4.1309560467277696E61
#> V16                               0
#> V17                               0
#> V18            3.946570514012266E27
#> V19                               0
#> V2           2.1016450342018168E114
#> V20            2.929323776318597E60
#> V21                               0
#> V22           1.2430227693839348E29
#> V23                               0
#> V24           2.5966976434308703E56
#> V25            2.091942518357803E14
#> V26                               0
#> V27           2.3602052788634936E73
#> V28                               0
#> V29            8.149083125788002E14
#> V3                                0
#> V30           3.5254706707412666E74
#> V31                               0
#> V32           3.7432779575432055E67
#> V33                               0
#> V34                         31.9749
#> V35            3.503701426771226E78
#> V36                               0
#> V37            8.576064231859844E12
#> V38           2.5995145131603977E22
#> V39                               0
#> V4           1.3686636712711646E240
#> V40                               0
#> V41            4.565592285112584E51
#> V42                               0
#> V43             9.14644135336649E46
#> V44                   31442298.7579
#> V45           1.6372235280330353E42
#> V46           1.6727646640156683E33
#> V47                               0
#> V48           6.511638354656365E286
#> V49          1.3854524149235158E102
#> V5                                0
#> V50                               0
#> V51                               0
#> V52            7.828980340681813E84
#> V53                        Infinity
#> V54                        Infinity
#> V55                               0
#> V56                               0
#> V57           5.325537959846305E250
#> V58                               0
#> V59                               0
#> V6             6.340212021270217E84
#> V60                        Infinity
#> V7                                0
#> V8                                0
#> V9             5.919975407070596E11
#> 


# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> classif.ce 
#>  0.3043478
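
score() above uses the default measure for classification tasks, the classification error (classif.ce). As a small follow-on sketch, other measures can be passed explicitly, and the confusion matrix gives a per-class breakdown:

```r
# Score with explicit measures instead of the default
# classification error.
predictions$score(msrs(c("classif.acc", "classif.ce")))

# Per-class breakdown of the test predictions
predictions$confusion
```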