
LogitBoost with simple regression functions as base learners. Calls RWeka::make_Weka_classifier() from RWeka.

Custom mlr3 parameters

  • output_debug_info:

    • original id: output-debug-info

  • do_not_check_capabilities:

    • original id: do-not-check-capabilities

  • num_decimal_places:

    • original id: num-decimal-places

  • batch_size:

    • original id: batch-size

  • Reason for change: the ids of these control arguments were changed because the original ids contain hyphens, which are not valid in R names (see the sketch after this list).
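
As an illustration, a minimal sketch of passing the renamed (snake_case) ids directly to lrn(); the particular values are arbitrary and only demonstrate the mlr3-side ids:

# set the renamed control arguments via their mlr3 ids (illustrative values)
learner = lrn(
  "classif.simple_logistic",
  output_debug_info = FALSE,
  num_decimal_places = 4,
  batch_size = 100
)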

Dictionary

This Learner can be instantiated via lrn():

lrn("classif.simple_logistic")

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob”

  • Feature Types: “logical”, “integer”, “numeric”, “factor”, “ordered”

  • Required Packages: mlr3, RWeka

Parameters

Id                         Type     Default  Levels       Range
subset                     untyped  -                     -
na.action                  untyped  -                     -
I                          integer  -                     \((-\infty, \infty)\)
S                          logical  FALSE    TRUE, FALSE  -
P                          logical  FALSE    TRUE, FALSE  -
M                          integer  -                     \((-\infty, \infty)\)
H                          integer  50                    \((-\infty, \infty)\)
W                          numeric  0                     \((-\infty, \infty)\)
A                          logical  FALSE    TRUE, FALSE  -
output_debug_info          logical  FALSE    TRUE, FALSE  -
do_not_check_capabilities  logical  FALSE    TRUE, FALSE  -
num_decimal_places         integer  2                     \([1, \infty)\)
batch_size                 integer  100                   \([1, \infty)\)
options                    untyped  NULL                  -
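
A short sketch of setting hyperparameters from the table above, either at construction time or on an existing learner; the chosen values are illustrative only, and set_values() assumes a recent paradox version:

# set hyperparameters at construction time
learner = lrn("classif.simple_logistic", H = 50, A = TRUE)

# or update them later through the parameter set
learner$param_set$set_values(num_decimal_places = 4)
learner$param_set$values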

References

Landwehr N, Hall M, Frank E (2005). “Logistic model trees.” Machine Learning, 59(1), 161–205.

Sumner M, Frank E, Hall M (2005). “Speeding up Logistic Model Tree Induction.” In 9th European Conference on Principles and Practice of Knowledge Discovery in Databases, 675–683.

Author

damirpolat

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifSimpleLogistic

Active bindings

marshaled

(logical(1))
Whether the learner has been marshaled.

Methods

Method new()

Creates a new instance of this R6 class.


Method marshal()

Marshal the learner's model.

Usage

LearnerClassifSimpleLogistic$marshal(...)

Arguments

...

(any)
Additional arguments passed to mlr3::marshal_model().


Method unmarshal()

Unmarshal the learner's model.

Usage

LearnerClassifSimpleLogistic$unmarshal(...)

Arguments

...

(any)
Additional arguments passed to mlr3::unmarshal_model().
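
Combined with the marshaled active binding documented above, a minimal round-trip sketch; that marshaling turns the fitted model into a serializable form (useful e.g. for parallel backends) is general mlr3 behaviour and an assumption here, not stated on this page:

library(mlr3)
learner = lrn("classif.simple_logistic")
learner$train(tsk("sonar"))

learner$marshal()     # encode the model for serialization
learner$marshaled     # TRUE
learner$unmarshal()   # restore the original model
learner$marshaled     # FALSE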


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifSimpleLogistic$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.
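
A short sketch of cloning; with deep = TRUE the copy should receive its own parameter set, so changes on the clone are not expected to affect the original (a general R6/mlr3 assumption, not stated on this page):

learner = lrn("classif.simple_logistic")
learner2 = learner$clone(deep = TRUE)
learner2$param_set$set_values(H = 100)
learner$param_set$values   # unaffected by the change on the clone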

Examples

# Define the Learner
learner = lrn("classif.simple_logistic")
print(learner)
#> 
#> ── <LearnerClassifSimpleLogistic> (classif.simple_logistic): LogitBoost Based Lo
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3 and RWeka
#> • Predict Types: [response] and prob
#> • Feature Types: logical, integer, numeric, factor, and ordered
#> • Encapsulation: none (fallback: -)
#> • Properties: marshal, missings, multiclass, and twoclass
#> • Other settings: use_weights = 'error'

# Define a Task
task = tsk("sonar")

# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> SimpleLogistic:
#> 
#> Class M :
#> -4.29 + 
#> [V1] * 12.31 +
#> [V11] * 4.06 +
#> [V16] * -1.57 +
#> [V20] * 1.37 +
#> [V22] * 0.38 +
#> [V23] * 1.97 +
#> [V28] * 0.45 +
#> [V29] * 0.38 +
#> [V3] * -3.7 +
#> [V31] * -1.78 +
#> [V32] * 1.57 +
#> [V35] * -0.48 +
#> [V36] * -1.44 +
#> [V39] * 0.83 +
#> [V4] * 3.55 +
#> [V42] * 0.71 +
#> [V45] * 4.3  +
#> [V49] * 8.31 +
#> [V50] * -36.47 +
#> [V51] * 68.48 +
#> [V52] * 72.34 +
#> [V55] * -24.42 +
#> [V56] * 23.75 +
#> [V57] * -69.92 +
#> [V60] * -34.04 +
#> [V7] * -1.94
#> 
#> Class R :
#> 4.29 + 
#> [V1] * -12.31 +
#> [V11] * -4.06 +
#> [V16] * 1.57 +
#> [V20] * -1.37 +
#> [V22] * -0.38 +
#> [V23] * -1.97 +
#> [V28] * -0.45 +
#> [V29] * -0.38 +
#> [V3] * 3.7  +
#> [V31] * 1.78 +
#> [V32] * -1.57 +
#> [V35] * 0.48 +
#> [V36] * 1.44 +
#> [V39] * -0.83 +
#> [V4] * -3.55 +
#> [V42] * -0.71 +
#> [V45] * -4.3 +
#> [V49] * -8.31 +
#> [V50] * 36.47 +
#> [V51] * -68.48 +
#> [V52] * -72.34 +
#> [V55] * 24.42 +
#> [V56] * -23.75 +
#> [V57] * 69.92 +
#> [V60] * 34.04 +
#> [V7] * 1.94
#> 


# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> classif.ce 
#>  0.2898551
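
Because the learner also supports the "prob" predict type, a short sketch of requesting probabilities and scoring with accuracy instead of classification error; concrete numbers will differ from the run above since partition() samples randomly:

# request probability predictions instead of hard labels
learner = lrn("classif.simple_logistic", predict_type = "prob")
learner$train(task, row_ids = ids$train)

predictions = learner$predict(task, row_ids = ids$test)
head(predictions$prob)

# score with accuracy
predictions$score(msr("classif.acc"))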