LogitBoost with simple regression functions as base learners. Calls RWeka::make_Weka_classifier() from RWeka.

Custom mlr3 parameters

  • output_debug_info:

    • original id: output-debug-info

  • do_not_check_capabilities:

    • original id: do-not-check-capabilities

  • num_decimal_places:

    • original id: num-decimal-places

  • batch_size:

    • original id: batch-size

  • Reason for change: the ids of these control arguments were changed because their original ids contain an irregular pattern (dashes) that is not used for mlr3 parameter names. The renamed ids are used when setting hyperparameters, as shown in the sketch below.
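
A minimal sketch of setting the renamed hyperparameters (illustrative values; assumes mlr3 and mlr3extralearners are attached):

library(mlr3)
library(mlr3extralearners)

# Use the underscored mlr3 ids, not Weka's dashed ids such as "num-decimal-places"
learner = lrn("classif.simple_logistic",
  num_decimal_places = 4,
  batch_size = 200
)
learner$param_set$values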

Dictionary

This Learner can be instantiated via lrn():

lrn("classif.simple_logistic")
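
The learner can also be retrieved from the mlr_learners dictionary; a short sketch (assumes mlr3 and mlr3extralearners are attached):

library(mlr3)
library(mlr3extralearners)

# Sugar function and dictionary lookup construct the same learner class
learner = lrn("classif.simple_logistic")
learner = mlr_learners$get("classif.simple_logistic")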

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob”

  • Feature Types: “logical”, “integer”, “numeric”, “factor”, “ordered”

  • Required Packages: mlr3, RWeka
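
Because both “response” and “prob” predict types are supported, the predict type can be switched before training; a minimal sketch (assumes mlr3 and mlr3extralearners are attached):

library(mlr3)
library(mlr3extralearners)

# Request probability predictions instead of the default "response"
learner = lrn("classif.simple_logistic", predict_type = "prob")

# Or change it on an existing learner object
learner$predict_type = "prob"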

Parameters

Id                         Type     Default  Levels       Range
subset                     untyped  -        -            -
na.action                  untyped  -        -            -
I                          integer  -        -            \((-\infty, \infty)\)
S                          logical  FALSE    TRUE, FALSE  -
P                          logical  FALSE    TRUE, FALSE  -
M                          integer  -        -            \((-\infty, \infty)\)
H                          integer  50       -            \((-\infty, \infty)\)
W                          numeric  0        -            \((-\infty, \infty)\)
A                          logical  FALSE    TRUE, FALSE  -
output_debug_info          logical  FALSE    TRUE, FALSE  -
do_not_check_capabilities  logical  FALSE    TRUE, FALSE  -
num_decimal_places         integer  2        -            \([1, \infty)\)
batch_size                 integer  100      -            \([1, \infty)\)
options                    untyped  NULL     -            -
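
The hyperparameters above can be inspected and updated through the learner's parameter set; for example (values chosen only for illustration):

library(mlr3)
library(mlr3extralearners)

learner = lrn("classif.simple_logistic")

# Print ids, types, defaults, and ranges
learner$param_set

# Update values after construction
learner$param_set$set_values(I = 10, num_decimal_places = 4)
learner$param_set$values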

References

Landwehr, Niels, Hall, Mark, Frank, Eibe (2005). “Logistic model trees.” Machine Learning, 59(1), 161–205.

Sumner M, Frank E, Hall M (2005). “Speeding up Logistic Model Tree Induction.” In 9th European Conference on Principles and Practice of Knowledge Discovery in Databases, 675–683.

See also

Author

damirpolat

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifSimpleLogistic

Active bindings

marshaled

(logical(1))
Whether the learner has been marshaled.

Methods

Inherited methods


Method new()

Creates a new instance of this R6 class.

Usage

LearnerClassifSimpleLogistic$new()


Method marshal()

Marshal the learner's model.

Usage

LearnerClassifSimpleLogistic$marshal(...)

Arguments

...

(any)
Additional arguments passed to mlr3::marshal_model().


Method unmarshal()

Unmarshal the learner's model.

Usage

LearnerClassifSimpleLogistic$unmarshal(...)

Arguments

...

(any)
Additional arguments passed to mlr3::unmarshal_model().
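
A sketch of a typical marshaling round trip, e.g. before serializing a trained learner to disk (workflow is illustrative; assumes mlr3 and mlr3extralearners are attached):

library(mlr3)
library(mlr3extralearners)

learner = lrn("classif.simple_logistic")
learner$train(tsk("sonar"))

# Convert the model into a serialization-safe form, then restore it
learner$marshal()
learner$marshaled    # TRUE while marshaled
learner$unmarshal()
learner$marshaled    # FALSE after unmarshaling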


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifSimpleLogistic$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.
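
For example, a deep clone yields an independent copy whose settings can be changed without affecting the original (illustrative; assumes mlr3 and mlr3extralearners are attached):

library(mlr3)
library(mlr3extralearners)

learner = lrn("classif.simple_logistic")
learner2 = learner$clone(deep = TRUE)

# Modifying the copy leaves the original untouched
learner2$predict_type = "prob"
learner$predict_type   # still "response"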

Examples

# Define the Learner
learner = lrn("classif.simple_logistic")
print(learner)
#> 
#> ── <LearnerClassifSimpleLogistic> (classif.simple_logistic): LogitBoost Based Lo
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3 and RWeka
#> • Predict Types: [response] and prob
#> • Feature Types: logical, integer, numeric, factor, and ordered
#> • Encapsulation: none (fallback: -)
#> • Properties: marshal, missings, multiclass, and twoclass
#> • Other settings: use_weights = 'error'

# Define a Task
task = tsk("sonar")

# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> SimpleLogistic:
#> 
#> Class M :
#> -4.2 + 
#> [V1] * 13.62 +
#> [V10] * 1.52 +
#> [V11] * 3.89 +
#> [V12] * 3.43 +
#> [V14] * -0.97 +
#> [V15] * -0.71 +
#> [V16] * -2.48 +
#> [V19] * 0.86 +
#> [V20] * 0.32 +
#> [V21] * 1.22 +
#> [V23] * 0.91 +
#> [V24] * 1.17 +
#> [V26] * -1.51 +
#> [V28] * 1.31 +
#> [V3] * -7.92 +
#> [V31] * -1.81 +
#> [V32] * 1.02 +
#> [V36] * -1.39 +
#> [V37] * -1.24 +
#> [V39] * 0.49 +
#> [V4] * 12.05 +
#> [V44] * 1.88 +
#> [V45] * 1.63 +
#> [V49] * 12.06 +
#> [V5] * 1.99 +
#> [V50] * -13.08 +
#> [V52] * 55.86 +
#> [V54] * 33.66 +
#> [V55] * -55.25 +
#> [V57] * -22.89 +
#> [V59] * 22.33 +
#> [V8] * -1.34 +
#> [V9] * 2.07
#> 
#> Class R :
#> 4.2  + 
#> [V1] * -13.62 +
#> [V10] * -1.52 +
#> [V11] * -3.89 +
#> [V12] * -3.43 +
#> [V14] * 0.97 +
#> [V15] * 0.71 +
#> [V16] * 2.48 +
#> [V19] * -0.86 +
#> [V20] * -0.32 +
#> [V21] * -1.22 +
#> [V23] * -0.91 +
#> [V24] * -1.17 +
#> [V26] * 1.51 +
#> [V28] * -1.31 +
#> [V3] * 7.92 +
#> [V31] * 1.81 +
#> [V32] * -1.02 +
#> [V36] * 1.39 +
#> [V37] * 1.24 +
#> [V39] * -0.49 +
#> [V4] * -12.05 +
#> [V44] * -1.88 +
#> [V45] * -1.63 +
#> [V49] * -12.06 +
#> [V5] * -1.99 +
#> [V50] * 13.08 +
#> [V52] * -55.86 +
#> [V54] * -33.66 +
#> [V55] * 55.25 +
#> [V57] * 22.89 +
#> [V59] * -22.33 +
#> [V8] * 1.34 +
#> [V9] * -2.07
#> 


# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> classif.ce 
#>  0.2028986
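
Continuing the example above, predictions$score() uses the default measure (classif.ce, the classification error); other measures can be passed explicitly, for instance:

# Score with accuracy, or with several measures at once
predictions$score(msr("classif.acc"))
predictions$score(msrs(c("classif.acc", "classif.ce")))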