
Linear logistic regression fitted via LogitBoost with simple regression functions as base learners (Weka's SimpleLogistic). Calls RWeka::make_Weka_classifier() from RWeka.
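
Under the hood, the mlr3 learner delegates to the Weka classifier that RWeka exposes. A minimal sketch of the equivalent direct RWeka call, assuming RWeka and a Java runtime are available (the Weka class name weka.classifiers.functions.SimpleLogistic is taken from Weka's documentation):

library(RWeka)

# Build an R interface function to Weka's SimpleLogistic classifier,
# the same classifier this mlr3 learner wraps.
SimpleLogistic = make_Weka_classifier("weka/classifiers/functions/SimpleLogistic")

# Fit it directly on a data.frame with a factor target.
fit = SimpleLogistic(Species ~ ., data = iris)
print(fit)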

Custom mlr3 parameters

  • output_debug_info:

    • original id: output-debug-info

  • do_not_check_capabilities:

    • original id: do-not-check-capabilities

  • num_decimal_places:

    • original id: num-decimal-places

  • batch_size:

    • original id: batch-size

  • Reason for change: the ids of these control arguments were changed because the original Weka ids contain irregular patterns (dashes); they are replaced with underscores in the mlr3 interface. A short example of setting them via the new ids follows this list.
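
For illustration, a minimal sketch of setting these renamed control arguments through the mlr3 interface (the parameter values are arbitrary examples, not recommendations):

# The mlr3 ids (with underscores) are used instead of Weka's original ids (with dashes)
learner = lrn("classif.simple_logistic",
  num_decimal_places = 4,
  batch_size = 50
)
learner$param_set$values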

Dictionary

This Learner can be instantiated via lrn():

lrn("classif.simple_logistic")

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob”

  • Feature Types: “logical”, “integer”, “numeric”, “factor”, “ordered”

  • Required Packages: mlr3, RWeka
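
Because "prob" is among the supported predict types, class probabilities can be requested instead of the default "response". A minimal sketch:

# Request probability predictions (at construction or afterwards)
learner = lrn("classif.simple_logistic", predict_type = "prob")
learner$predict_type
# or on an existing learner:
# learner$predict_type = "prob"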

Parameters

Id                         Type     Default  Levels       Range
subset                     untyped  -                     -
na.action                  untyped  -                     -
I                          integer  -                     \((-\infty, \infty)\)
S                          logical  FALSE    TRUE, FALSE  -
P                          logical  FALSE    TRUE, FALSE  -
M                          integer  -                     \((-\infty, \infty)\)
H                          integer  50                    \((-\infty, \infty)\)
W                          numeric  0                     \((-\infty, \infty)\)
A                          logical  FALSE    TRUE, FALSE  -
output_debug_info          logical  FALSE    TRUE, FALSE  -
do_not_check_capabilities  logical  FALSE    TRUE, FALSE  -
num_decimal_places         integer  2                     \([1, \infty)\)
batch_size                 integer  100                   \([1, \infty)\)
options                    untyped  NULL                  -
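
The single-letter parameters correspond to Weka's SimpleLogistic command-line flags; their semantics are documented by Weka rather than here (e.g. RWeka::WOW("weka/classifiers/functions/SimpleLogistic") prints Weka's option help). A minimal sketch of inspecting and setting them, with purely illustrative values:

learner = lrn("classif.simple_logistic")

# Inspect ids, types, defaults, and ranges
learner$param_set

# Set values within the ranges given in the table above
# (I and W are Weka flags; see the WOW() output for their meaning)
learner$param_set$values = list(I = 10, W = 0.1)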

References

Landwehr, Niels, Hall, Mark, Frank, Eibe (2005). “Logistic model trees.” Machine Learning, 59(1), 161–205.

Sumner, Marc, Frank, Eibe, Hall, Mark (2005). “Speeding up Logistic Model Tree Induction.” In 9th European Conference on Principles and Practice of Knowledge Discovery in Databases, 675–683.

Author

damirpolat

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifSimpleLogistic

Active bindings

marshaled

(logical(1))
Whether the learner has been marshaled.

Methods

Method new()

Creates a new instance of this R6 class.

Usage

LearnerClassifSimpleLogistic$new()


Method marshal()

Marshal the learner's model.

Usage

LearnerClassifSimpleLogistic$marshal(...)

Arguments

...

(any)
Additional arguments passed to mlr3::marshal_model().


Method unmarshal()

Unmarshal the learner's model.

Usage

LearnerClassifSimpleLogistic$unmarshal(...)

Arguments

...

(any)
Additional arguments passed to mlr3::unmarshal_model().
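
Marshaling matters for this learner because the fitted model is a reference to a Java object, which cannot be serialized directly (for example with saveRDS() or when shipping the learner to parallel workers). A minimal usage sketch, reusing the task from the Examples section below:

task = tsk("sonar")
learner = lrn("classif.simple_logistic")
learner$train(task)

learner$marshal()    # convert the Java model reference into a serializable form
learner$marshaled    # TRUE
learner$unmarshal()  # restore the model so it can be used for prediction again
learner$marshaled    # FALSE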


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifSimpleLogistic$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

# Define the Learner
learner = lrn("classif.simple_logistic")
print(learner)
#> 
#> ── <LearnerClassifSimpleLogistic> (classif.simple_logistic): LogitBoost Based Lo
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3 and RWeka
#> • Predict Types: [response] and prob
#> • Feature Types: logical, integer, numeric, factor, and ordered
#> • Encapsulation: none (fallback: -)
#> • Properties: marshal, missings, multiclass, and twoclass
#> • Other settings: use_weights = 'error'

# Define a Task
task = tsk("sonar")

# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> SimpleLogistic:
#> 
#> Class M :
#> -1.84 + 
#> [V1] * 9.22 +
#> [V11] * 2.87 +
#> [V12] * 3.38 +
#> [V14] * -0.49 +
#> [V15] * -0.99 +
#> [V16] * -1.78 +
#> [V2] * 3.1  +
#> [V21] * 1.72 +
#> [V23] * 0.78 +
#> [V24] * 0.78 +
#> [V27] * -0.87 +
#> [V28] * -0.32 +
#> [V3] * -6.04 +
#> [V30] * 1.24 +
#> [V31] * -1.76 +
#> [V35] * -0.52 +
#> [V36] * -1.54 +
#> [V37] * -0.36 +
#> [V39] * 0.75 +
#> [V40] * -1.83 +
#> [V44] * 1.53 +
#> [V45] * 3.31 +
#> [V48] * 2.54 +
#> [V49] * 12.07 +
#> [V50] * -40.91 +
#> [V51] * 55.83 +
#> [V52] * 54.92 +
#> [V53] * 12.19 +
#> [V57] * -26.63 +
#> [V58] * 37.9 +
#> [V59] * 35.34 +
#> [V60] * -27.48 +
#> [V7] * -4.77 +
#> [V8] * -2.98
#> 
#> Class R :
#> 1.84 + 
#> [V1] * -9.22 +
#> [V11] * -2.87 +
#> [V12] * -3.38 +
#> [V14] * 0.49 +
#> [V15] * 0.99 +
#> [V16] * 1.78 +
#> [V2] * -3.1 +
#> [V21] * -1.72 +
#> [V23] * -0.78 +
#> [V24] * -0.78 +
#> [V27] * 0.87 +
#> [V28] * 0.32 +
#> [V3] * 6.04 +
#> [V30] * -1.24 +
#> [V31] * 1.76 +
#> [V35] * 0.52 +
#> [V36] * 1.54 +
#> [V37] * 0.36 +
#> [V39] * -0.75 +
#> [V40] * 1.83 +
#> [V44] * -1.53 +
#> [V45] * -3.31 +
#> [V48] * -2.54 +
#> [V49] * -12.07 +
#> [V50] * 40.91 +
#> [V51] * -55.83 +
#> [V52] * -54.92 +
#> [V53] * -12.19 +
#> [V57] * 26.63 +
#> [V58] * -37.9 +
#> [V59] * -35.34 +
#> [V60] * 27.48 +
#> [V7] * 4.77 +
#> [V8] * 2.98
#> 


# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> classif.ce 
#>  0.1594203
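
Other measures can be passed to $score() as well; a minimal sketch using the same prediction object (classif.acc is a standard mlr3 measure, and the confusion matrix is available directly):

# Accuracy instead of classification error
predictions$score(msr("classif.acc"))

# Confusion matrix of the test-set predictions
predictions$confusion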