Logistic Regression with Exhaustive Search
Source: R/learner_ExhaustiveSearch_classif_glm.R
mlr_learners_classif.exhaustive_search.Rd
Logistic regression in which the features used in the model are selected by running an exhaustive search.
Calls ExhaustiveSearch::ExhaustiveSearch() from package ExhaustiveSearch.
Initial parameter values
family
Actual default: NULL
Adjusted default: "binomial"
Reason for change: To comply with the mlr3 architecture, we differentiate between classification and regression learners.
nThreads
Actual default: NULL
Adjusted default: 1
Reason for change: Suppressing the automatic internal parallelization if cv.folds > 0.
quietly
Actual default: FALSE
Adjusted default: TRUE
Reason for change: Suppression of constant printing to the console.
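If the upstream behaviour is preferred, these adjusted defaults can be overridden when the learner is constructed. A minimal sketch (the chosen values are illustrative only, not recommendations):

library(mlr3)
library(mlr3extralearners)

# override the adjusted defaults set by the mlr3 wrapper
learner = lrn("classif.exhaustive_search", quietly = FALSE, nThreads = 2)
learner$param_set$values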
Meta Information
Task type: “classif”
Predict Types: “response”, “prob”
Feature Types: “logical”, “integer”, “numeric”
Required Packages: mlr3, mlr3extralearners, ExhaustiveSearch
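These fields can also be inspected on a constructed learner object; a brief sketch:

library(mlr3)
library(mlr3extralearners)

learner = lrn("classif.exhaustive_search")
learner$predict_types  # "response", "prob"
learner$feature_types  # "logical", "integer", "numeric"
learner$packages       # "mlr3", "mlr3extralearners", "ExhaustiveSearch"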
Parameters
Id | Type | Default | Levels | Range |
family | character | - | gaussian, binomial | - |
performanceMeasure | character | - | MSE, AIC | - |
combsUpTo | integer | - | - | \([1, \infty)\) |
nResults | integer | 5000 | - | \([1, \infty)\) |
nThreads | integer | - | - | \([1, \infty)\) |
testSetIDs | integer | - | - | \([1, \infty)\) |
errorVal | untyped | -1L | - | - |
quietly | logical | - | TRUE, FALSE | - |
checkLarge | logical | TRUE | TRUE, FALSE | - |
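As an illustration of how these hyperparameters are set (the values below are arbitrary examples), they can be passed to lrn() at construction or changed afterwards through the parameter set:

library(mlr3)
library(mlr3extralearners)

# set search parameters at construction
learner = lrn("classif.exhaustive_search", performanceMeasure = "AIC", nResults = 100)

# or update them later via the parameter set
learner$param_set$set_values(combsUpTo = 2)
learner$param_set$values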
See also
as.data.table(mlr_learners) for a table of available Learners in the running session (depending on the loaded packages).
Chapter in the mlr3book: https://mlr3book.mlr-org.com/basics.html#learners
mlr3learners for a selection of recommended learners.
mlr3cluster for unsupervised clustering learners.
mlr3pipelines to combine learners with pre- and postprocessing steps.
mlr3tuning for tuning of hyperparameters, mlr3tuningspaces for established default tuning spaces.
Super classes
mlr3::Learner
-> mlr3::LearnerClassif
-> LearnerClassifExhaustiveSearch
Methods
Inherited methods
Examples
# load the required packages (mlr3extralearners registers the learner)
library(mlr3)
library(mlr3extralearners)

# define the learner
learner = lrn("classif.exhaustive_search", predict_type = "prob", combsUpTo = 3)
# define the task
# and subset to 3 features to speed up the example
tsk_sonar = tsk("sonar")$select(c("V1", "V2", "V3"))
# train the learner
learner$train(tsk_sonar)
# extract selected features
learner$selected_features()
#> [1] "V1"
# predict on training task
learner$predict(tsk_sonar)
#>
#> ── <PredictionClassif> for 208 observations: ───────────────────────────────────
#> row_ids truth response prob.M prob.R
#> 1 R R 0.4753214 0.5246786
#> 2 R M 0.6664160 0.3335840
#> 3 R M 0.5237325 0.4762675
#> --- --- --- --- ---
#> 206 M M 0.7125284 0.2874716
#> 207 M M 0.5555589 0.4444411
#> 208 M M 0.5221729 0.4778271
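# A possible follow-up (not part of the original example): store the
# prediction and score it, e.g. with classification accuracy.
prediction = learner$predict(tsk_sonar)
prediction$score(msr("classif.acc"))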