Regression Abess Learner
mlr_learners_regr.abess.Rd
Adaptive best-subset selection for regression.
Calls abess::abess() from package abess.
Parameters
Id | Type | Default | Levels | Range
---|---|---|---|---
family | character | gaussian | gaussian, mgaussian, poisson, gamma | -
tune.path | character | sequence | sequence, gsection | -
tune.type | character | gic | gic, aic, bic, ebic, cv | -
normalize | integer | NULL | - | \((-\infty, \infty)\)
support.size | untyped | NULL | - | -
c.max | integer | 2 | - | \([1, \infty)\)
gs.range | untyped | NULL | - | -
lambda | numeric | 0 | - | \([0, \infty)\)
always.include | untyped | NULL | - | -
group.index | untyped | NULL | - | -
init.active.set | untyped | NULL | - | -
splicing.type | integer | 2 | - | \([1, 2]\)
max.splicing.iter | integer | 20 | - | \([1, \infty)\)
screening.num | integer | NULL | - | \([0, \infty)\)
important.search | integer | NULL | - | \([0, \infty)\)
warm.start | logical | TRUE | TRUE, FALSE | -
nfolds | integer | 5 | - | \((-\infty, \infty)\)
foldid | untyped | NULL | - | -
cov.update | logical | FALSE | TRUE, FALSE | -
newton | character | exact | exact, approx | -
newton.thresh | numeric | 1e-06 | - | \([0, \infty)\)
max.newton.iter | integer | NULL | - | \([1, \infty)\)
early.stop | logical | FALSE | TRUE, FALSE | -
ic.scale | numeric | 1 | - | \([0, \infty)\)
num.threads | integer | 0 | - | \([0, \infty)\)
seed | integer | 1 | - | \((-\infty, \infty)\)
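As an illustrative sketch (not part of the original page), the parameters above can be set at construction time via the usual mlr3 `lrn()` interface; here the support size is tuned by cross-validation. The choice of values is hypothetical, and mlr3extralearners is assumed to be the package providing this learner:

```r
library(mlr3)
library(mlr3extralearners)  # assumed to provide regr.abess

# Configure adaptive best-subset selection with 10-fold CV
# over candidate support sizes 1..5 (illustrative values).
learner = lrn("regr.abess",
  tune.type    = "cv",   # select the support size by cross-validation
  nfolds       = 10,     # number of CV folds
  support.size = 1:5     # candidate numbers of selected features
)
```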
Initial parameter values
num.threads
: This parameter is initialized to 1 (default is 0) to avoid conflicts with the mlr3 parallelization.
family
: Depends on the task type. If the parameter family is NULL, it is set to "binomial" for binary classification tasks and to "multinomial" for multiclass classification problems.
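As a brief sketch (an assumption about the usual mlr3 interface, not from the original page), the initialized value of num.threads can be overridden at construction:

```r
library(mlr3)
library(mlr3extralearners)  # assumed to provide regr.abess

# Restore abess's own default (0 = let abess decide the thread count);
# only do this if mlr3-level parallelization is not in use.
learner = lrn("regr.abess", num.threads = 0)
```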
See also
as.data.table(mlr_learners)
for a table of available Learners in the running session (depending on the loaded packages).
Chapter in the mlr3book: https://mlr3book.mlr-org.com/basics.html#learners
mlr3learners for a selection of recommended learners.
mlr3cluster for unsupervised clustering learners.
mlr3pipelines to combine learners with pre- and postprocessing steps.
mlr3tuning for tuning of hyperparameters, mlr3tuningspaces for established default tuning spaces.
Super classes
mlr3::Learner
-> mlr3::LearnerRegr
-> LearnerRegrAbess
Methods
Method selected_features()
Extracts the names of the selected features from the model via abess::extract().
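A minimal usage sketch for this method, assuming a trained learner (mlr3extralearners is assumed to be the providing package):

```r
library(mlr3)
library(mlr3extralearners)  # assumed to provide regr.abess

task = tsk("mtcars")
learner = lrn("regr.abess")
learner$train(task)

# Names of the features chosen by best-subset selection
learner$selected_features()
```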
Examples
# Define the Learner
learner = mlr3::lrn("regr.abess")
print(learner)
#> <LearnerRegrAbess:regr.abess>: Fast Best Subset Selection for Regression
#> * Model: -
#> * Parameters: num.threads=1
#> * Packages: mlr3, abess
#> * Predict Types: [response]
#> * Feature Types: integer, numeric
#> * Properties: selected_features, weights
# Define a Task
task = mlr3::tsk("mtcars")
# Create train and test set
ids = mlr3::partition(task)
# Train the learner on the training ids
learner$train(task, row_ids = ids$train)
print(learner$model)
#> Call:
#> abess.default(x = task$data(cols = task$feature_names), y = as.matrix(task$data(cols = task$target_names)),
#> num.threads = 1L)
#>
#> support.size dev GIC
#> 1 0 14.556485 56.23877
#> 2 1 4.071672 32.04870
#> 3 2 3.336409 30.42993
#> 4 3 2.528853 27.17379
#> 5 4 2.171111 26.53430
#> 6 5 1.857696 25.82392
#> 7 6 1.841072 28.19872
#> 8 7 1.807328 30.37383
#> 9 8 1.800001 32.85209
# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)
# Score the predictions
predictions$score()
#> regr.mse
#> 13.18668