Accelerated oblique random classification forest. Calls aorsf::orsf() from package aorsf. Note that although the learner has the property "missings" and can in principle deal with missing values, this behaviour has to be configured via the parameter na_action.
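For example (a minimal sketch, assuming mlr3, mlr3extralearners, and aorsf are installed), missing values can be handled by mean/mode imputation instead of the default failure:

```r
library(mlr3)
library(mlr3extralearners)

# na_action defaults to "fail"; switch to aorsf's built-in mean/mode imputation
learner = lrn("classif.aorsf", na_action = "impute_meanmode")
```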

Initial parameter values

  • n_thread: This parameter is initialized to 1 (default is 0) to avoid conflicts with the mlr3 parallelization.

  • pred_simplify: This parameter must be TRUE, otherwise the predicted response is NA.
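The initialized value of n_thread can be overridden like any other hyperparameter. A short sketch (assuming mlr3, mlr3extralearners, and aorsf are installed):

```r
library(mlr3)
library(mlr3extralearners)

learner = lrn("classif.aorsf")
# override the initialized value of 1; a value of 0 lets aorsf auto-detect
learner$param_set$values$n_thread = 4
```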

Dictionary

This Learner can be instantiated via lrn():

lrn("classif.aorsf")

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob”

  • Feature Types: “integer”, “numeric”, “factor”, “ordered”

  • Required Packages: mlr3, mlr3extralearners, aorsf

Parameters

Id | Type | Default | Levels | Range
attach_data | logical | TRUE | TRUE, FALSE | -
epsilon | numeric | 1e-09 | - | \([0, \infty)\)
importance | character | anova | none, anova, negate, permute | -
importance_max_pvalue | numeric | 0.01 | - | \([1e-04, 0.9999]\)
leaf_min_events | integer | 1 | - | \([1, \infty)\)
leaf_min_obs | integer | 5 | - | \([1, \infty)\)
max_iter | integer | 20 | - | \([1, \infty)\)
method | character | glm | glm, net, pca, random | -
mtry | integer | NULL | - | \([1, \infty)\)
mtry_ratio | numeric | - | - | \([0, 1]\)
n_retry | integer | 3 | - | \([0, \infty)\)
n_split | integer | 5 | - | \([1, \infty)\)
n_thread | integer | - | - | \([0, \infty)\)
n_tree | integer | 500 | - | \([1, \infty)\)
na_action | character | fail | fail, impute_meanmode | -
net_mix | numeric | 0.5 | - | \((-\infty, \infty)\)
oobag | logical | FALSE | TRUE, FALSE | -
oobag_eval_every | integer | NULL | - | \([1, \infty)\)
oobag_fun | untyped | NULL | - | -
oobag_pred_type | character | prob | none, leaf, prob, class | -
pred_aggregate | logical | TRUE | TRUE, FALSE | -
sample_fraction | numeric | 0.632 | - | \([0, 1]\)
sample_with_replacement | logical | TRUE | TRUE, FALSE | -
scale_x | logical | FALSE | TRUE, FALSE | -
split_min_events | integer | 5 | - | \([1, \infty)\)
split_min_obs | integer | 10 | - | \([1, \infty)\)
split_min_stat | numeric | NULL | - | \([0, \infty)\)
split_rule | character | gini | gini, cstat | -
target_df | integer | NULL | - | \([1, \infty)\)
tree_seeds | integer | NULL | - | \([1, \infty)\)
verbose_progress | logical | FALSE | TRUE, FALSE | -

Author

annanzrv

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifObliqueRandomForest

Methods

Inherited methods


Method new()

Creates a new instance of this R6 class.


Method oob_error()

OOB concordance error, extracted from the model slot eval_oobag$stat_values.

Usage

LearnerClassifObliqueRandomForest$oob_error()

Returns

numeric().
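A hedged usage sketch (assuming the packages and the "sonar" task are available; out-of-bag evaluation is enabled here via oobag = TRUE):

```r
library(mlr3)
library(mlr3extralearners)

learner = lrn("classif.aorsf", oobag = TRUE)
learner$train(tsk("sonar"))
# scalar numeric extracted from the fitted model's eval_oobag$stat_values
err = learner$oob_error()
```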


Method importance()

The importance scores are extracted from the model.

Usage

LearnerClassifObliqueRandomForest$importance()

Returns

Named numeric().


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifObliqueRandomForest$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

# Define the Learner
learner = lrn("classif.aorsf")
print(learner)
#> 
#> ── <LearnerClassifObliqueRandomForest> (classif.aorsf): Oblique Random Forest Cl
#> • Model: -
#> • Parameters: n_thread=1
#> • Packages: mlr3, mlr3extralearners, and aorsf
#> • Predict Types: [response] and prob
#> • Feature Types: integer, numeric, factor, and ordered
#> • Encapsulation: none (fallback: -)
#> • Properties: importance, missings, multiclass, oob_error, twoclass, and
#> weights
#> • Other settings: use_weights = 'use', predict_raw = 'FALSE'

# Define a Task
task = tsk("sonar")

# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> ---------- Oblique random classification forest
#> 
#>      Linear combinations: Logistic regression
#>           N observations: 139
#>                N classes: 2
#>                  N trees: 500
#>       N predictors total: 60
#>    N predictors per node: 8
#>  Average leaves per tree: 4.73
#> Min observations in leaf: 5
#>           OOB stat value: 0.92
#>            OOB stat type: AUC-ROC
#>      Variable importance: anova
#> 
#> -----------------------------------------
print(learner$importance())
#>         V11         V52         V12         V10         V22         V36 
#> 0.436464088 0.427230047 0.357843137 0.355769231 0.320754717 0.312195122 
#>         V21         V49          V9         V45         V51         V23 
#> 0.296650718 0.295774648 0.289855072 0.289719626 0.266990291 0.244635193 
#>         V13          V1         V48         V46         V37         V44 
#> 0.242857143 0.236111111 0.229357798 0.225490196 0.222797927 0.205479452 
#>         V47         V20         V29          V4         V24         V35 
#> 0.204878049 0.192893401 0.172043011 0.157635468 0.146788991 0.146739130 
#>         V19         V43         V28          V2         V18         V57 
#> 0.144230769 0.142156863 0.142105263 0.137931034 0.120418848 0.117647059 
#>         V16         V31         V40         V34         V41         V14 
#> 0.112903226 0.109375000 0.109004739 0.100961538 0.091891892 0.091370558 
#>         V53         V30         V17         V15          V8         V25 
#> 0.089473684 0.084577114 0.079096045 0.075757576 0.071090047 0.070652174 
#>         V27          V5         V32         V54         V60         V33 
#> 0.069444444 0.066298343 0.065573770 0.064356436 0.061224490 0.061032864 
#>         V38         V42         V50         V26          V3          V7 
#> 0.056994819 0.052631579 0.051282051 0.042857143 0.039603960 0.037234043 
#>          V6         V39         V58         V55         V56         V59 
#> 0.035000000 0.026200873 0.025510204 0.019704433 0.009615385 0.000000000 

# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> classif.ce 
#>  0.2753623