Classification SimpleLogistic Learner
Source: R/learner_RWeka_classif_simple_logistic.R
mlr_learners_classif.simple_logistic.Rd
LogitBoost with simple regression functions as base learners.
Calls RWeka::make_Weka_classifier() from package RWeka.
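For orientation, here is a minimal sketch of the underlying RWeka interface that the learner wraps; the Weka class path used below is an assumption, and the mlr3 interface shown in the Examples is the recommended way to use the learner.
# Sketch only: build the Weka SimpleLogistic classifier directly via RWeka;
# the class path "weka/classifiers/functions/SimpleLogistic" is assumed here.
library(RWeka)
SimpleLogistic = make_Weka_classifier("weka/classifiers/functions/SimpleLogistic")
fit = SimpleLogistic(Species ~ ., data = iris)
print(fit)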
Custom mlr3 parameters
output_debug_info (original id: output-debug-info)
do_not_check_capabilities (original id: do-not-check-capabilities)
num_decimal_places (original id: num-decimal-places)
batch_size (original id: batch-size)
Reason for change: the ids of these control arguments were changed because their original ids contain hyphens, an irregular pattern for mlr3 parameter ids. An example of setting them through the mlr3 interface is shown below.
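The renamed parameters are set through the usual mlr3 constructor; a short sketch with illustrative values, assuming mlr3 and mlr3extralearners are installed:
# Illustrative values only
library(mlr3)
library(mlr3extralearners)
learner = lrn("classif.simple_logistic",
  output_debug_info = FALSE,
  num_decimal_places = 4,
  batch_size = 100
)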
Parameters
Id | Type | Default | Levels | Range
subset | untyped | - | - | -
na.action | untyped | - | - | -
I | integer | - | - | \((-\infty, \infty)\)
S | logical | FALSE | TRUE, FALSE | -
P | logical | FALSE | TRUE, FALSE | -
M | integer | - | - | \((-\infty, \infty)\)
H | integer | 50 | - | \((-\infty, \infty)\)
W | numeric | 0 | - | \((-\infty, \infty)\)
A | logical | FALSE | TRUE, FALSE | -
output_debug_info | logical | FALSE | TRUE, FALSE | -
do_not_check_capabilities | logical | FALSE | TRUE, FALSE | -
num_decimal_places | integer | 2 | - | \([1, \infty)\)
batch_size | integer | 100 | - | \([1, \infty)\)
options | untyped | NULL | - | -
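Hyperparameters from the table can also be adjusted after construction through the learner's param_set; a hedged sketch with illustrative values (see the Weka documentation of SimpleLogistic for the exact meaning of each option):
# Illustrative values only
learner = lrn("classif.simple_logistic")
learner$param_set$values$H = 100
learner$param_set$values$A = TRUE
learner$param_set$values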
References
Landwehr N, Hall M, Frank E (2005). “Logistic model trees.” Machine Learning, 59(1), 161–205.
Sumner M, Frank E, Hall M (2005). “Speeding up Logistic Model Tree Induction.” In Proceedings of the 9th European Conference on Principles and Practice of Knowledge Discovery in Databases, 675–683.
See also
as.data.table(mlr_learners) for a table of available Learners in the running session (depending on the loaded packages).
Chapter in the mlr3book: https://mlr3book.mlr-org.com/basics.html#learners
mlr3learners for a selection of recommended learners.
mlr3cluster for unsupervised clustering learners.
mlr3pipelines to combine learners with pre- and postprocessing steps.
mlr3tuning for tuning of hyperparameters, mlr3tuningspaces for established default tuning spaces.
Super classes
mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifSimpleLogistic
Methods
Inherited methods
mlr3::Learner$base_learner()
mlr3::Learner$configure()
mlr3::Learner$encapsulate()
mlr3::Learner$format()
mlr3::Learner$help()
mlr3::Learner$predict()
mlr3::Learner$predict_newdata()
mlr3::Learner$print()
mlr3::Learner$reset()
mlr3::Learner$selected_features()
mlr3::Learner$train()
mlr3::LearnerClassif$predict_newdata_fast()
Method marshal()
Marshal the learner's model.
Arguments
...
(any)
Additional arguments passed to mlr3::marshal_model().
Method unmarshal()
Unmarshal the learner's model.
Arguments
...
(any)
Additional arguments passed to mlr3::unmarshal_model().
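A hedged usage sketch of both methods; marshaling is typically needed when a trained model has to survive serialization, for example when predicting on parallel workers:
# Sketch: convert the trained model into a serialization-safe form, then restore it
learner = lrn("classif.simple_logistic")
learner$train(tsk("sonar"))
learner$marshal()    # model is now stored in a serialization-safe form
learner$unmarshal()  # restore the original RWeka model object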
Examples
# Define the Learner
learner = lrn("classif.simple_logistic")
print(learner)
#>
#> ── <LearnerClassifSimpleLogistic> (classif.simple_logistic): LogitBoost Based Lo
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3 and RWeka
#> • Predict Types: [response] and prob
#> • Feature Types: logical, integer, numeric, factor, and ordered
#> • Encapsulation: none (fallback: -)
#> • Properties: marshal, missings, multiclass, and twoclass
#> • Other settings: use_weights = 'error'
# Define a Task
task = tsk("sonar")
# Create train and test set
ids = partition(task)
# Train the learner on the training ids
learner$train(task, row_ids = ids$train)
print(learner$model)
#> SimpleLogistic:
#>
#> Class M :
#> -1.84 +
#> [V1] * 9.22 +
#> [V11] * 2.87 +
#> [V12] * 3.38 +
#> [V14] * -0.49 +
#> [V15] * -0.99 +
#> [V16] * -1.78 +
#> [V2] * 3.1 +
#> [V21] * 1.72 +
#> [V23] * 0.78 +
#> [V24] * 0.78 +
#> [V27] * -0.87 +
#> [V28] * -0.32 +
#> [V3] * -6.04 +
#> [V30] * 1.24 +
#> [V31] * -1.76 +
#> [V35] * -0.52 +
#> [V36] * -1.54 +
#> [V37] * -0.36 +
#> [V39] * 0.75 +
#> [V40] * -1.83 +
#> [V44] * 1.53 +
#> [V45] * 3.31 +
#> [V48] * 2.54 +
#> [V49] * 12.07 +
#> [V50] * -40.91 +
#> [V51] * 55.83 +
#> [V52] * 54.92 +
#> [V53] * 12.19 +
#> [V57] * -26.63 +
#> [V58] * 37.9 +
#> [V59] * 35.34 +
#> [V60] * -27.48 +
#> [V7] * -4.77 +
#> [V8] * -2.98
#>
#> Class R :
#> 1.84 +
#> [V1] * -9.22 +
#> [V11] * -2.87 +
#> [V12] * -3.38 +
#> [V14] * 0.49 +
#> [V15] * 0.99 +
#> [V16] * 1.78 +
#> [V2] * -3.1 +
#> [V21] * -1.72 +
#> [V23] * -0.78 +
#> [V24] * -0.78 +
#> [V27] * 0.87 +
#> [V28] * 0.32 +
#> [V3] * 6.04 +
#> [V30] * -1.24 +
#> [V31] * 1.76 +
#> [V35] * 0.52 +
#> [V36] * 1.54 +
#> [V37] * 0.36 +
#> [V39] * -0.75 +
#> [V40] * 1.83 +
#> [V44] * -1.53 +
#> [V45] * -3.31 +
#> [V48] * -2.54 +
#> [V49] * -12.07 +
#> [V50] * 40.91 +
#> [V51] * -55.83 +
#> [V52] * -54.92 +
#> [V53] * -12.19 +
#> [V57] * 26.63 +
#> [V58] * -37.9 +
#> [V59] * -35.34 +
#> [V60] * 27.48 +
#> [V7] * 4.77 +
#> [V8] * 2.98
#>
# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)
# Score the predictions
predictions$score()
#> classif.ce
#> 0.1594203
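A hedged follow-up to the example above: switching to probability predictions, which the learner supports according to the "Predict Types" shown in its print output.
# Predict class probabilities instead of hard labels (illustrative continuation)
learner$predict_type = "prob"
learner$train(task, row_ids = ids$train)
predictions = learner$predict(task, row_ids = ids$test)
predictions$score(msr("classif.auc"))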