
Random forest for classification. Calls randomForestSRC::rfsrc() from package randomForestSRC.

Dictionary

This Learner can be instantiated via lrn():

lrn("classif.rfsrc")

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob”

  • Feature Types: “logical”, “integer”, “numeric”, “factor”

  • Required Packages: mlr3, mlr3extralearners, randomForestSRC

Parameters

Id             | Type      | Default | Levels                                   | Range
---------------|-----------|---------|------------------------------------------|-------------------
ntree          | integer   | 500     | -                                        | \([1, \infty)\)
mtry           | integer   | -       | -                                        | \([1, \infty)\)
mtry.ratio     | numeric   | -       | -                                        | \([0, 1]\)
nodesize       | integer   | 15      | -                                        | \([1, \infty)\)
nodedepth      | integer   | -       | -                                        | \([1, \infty)\)
splitrule      | character | gini    | gini, auc, entropy                       | -
nsplit         | integer   | 10      | -                                        | \([0, \infty)\)
importance     | character | FALSE   | FALSE, TRUE, none, permute, random, anti | -
block.size     | integer   | 10      | -                                        | \([1, \infty)\)
bootstrap      | character | by.root | by.root, by.node, none, by.user          | -
samptype       | character | swor    | swor, swr                                | -
samp           | untyped   | -       | -                                        | -
membership     | logical   | FALSE   | TRUE, FALSE                              | -
sampsize       | untyped   | -       | -                                        | -
sampsize.ratio | numeric   | -       | -                                        | \([0, 1]\)
na.action      | character | na.omit | na.omit, na.impute                       | -
nimpute        | integer   | 1       | -                                        | \([1, \infty)\)
proximity      | character | FALSE   | FALSE, TRUE, inbag, oob, all             | -
distance       | character | FALSE   | FALSE, TRUE, inbag, oob, all             | -
forest.wt      | character | FALSE   | FALSE, TRUE, inbag, oob, all             | -
xvar.wt        | untyped   | -       | -                                        | -
split.wt       | untyped   | -       | -                                        | -
forest         | logical   | TRUE    | TRUE, FALSE                              | -
var.used       | character | FALSE   | FALSE, all.trees                         | -
split.depth    | character | FALSE   | FALSE, all.trees, by.tree                | -
seed           | integer   | -       | -                                        | \((-\infty, -1]\)
do.trace       | logical   | FALSE   | TRUE, FALSE                              | -
get.tree       | untyped   | -       | -                                        | -
outcome        | character | train   | train, test                              | -
ptn.count      | integer   | 0       | -                                        | \([0, \infty)\)
cores          | integer   | 1       | -                                        | \([1, \infty)\)
save.memory    | logical   | FALSE   | TRUE, FALSE                              | -
perf.type      | character | -       | gmean, misclass, brier, none             | -
case.depth     | logical   | FALSE   | TRUE, FALSE                              | -
marginal.xvar  | untyped   | NULL    | -                                        | -

Custom mlr3 parameters

  • mtry: This hyperparameter can alternatively be set via the added hyperparameter mtry.ratio as mtry = max(ceiling(mtry.ratio * n_features), 1). Note that mtry and mtry.ratio are mutually exclusive (see the sketch after this list).

  • sampsize: This hyperparameter can alternatively be set via the added hyperparameter sampsize.ratio as sampsize = max(ceiling(sampsize.ratio * n_obs), 1). Note that sampsize and sampsize.ratio are mutually exclusive.

  • cores: This value is set as the option rf.cores during training and is set to 1 by default.
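
A minimal sketch of how these translate in practice, using the 60-feature sonar task for concreteness:

# mtry.ratio = 0.25 resolves to mtry = max(ceiling(0.25 * 60), 1) = 15 on sonar;
# sampsize.ratio works analogously on the number of training observations
learner = lrn("classif.rfsrc", mtry.ratio = 0.25, sampsize.ratio = 0.8, cores = 2)
learner$train(tsk("sonar"))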

References

Breiman, Leo (2001). “Random Forests.” Machine Learning, 45(1), 5–32. ISSN 1573-0565, doi:10.1023/A:1010933404324.

Author

RaphaelS1

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifRandomForestSRC

Methods


Method new()

Creates a new instance of this R6 class.

Usage

LearnerClassifRandomForestSRC$new()


Method importance()

The importance scores are extracted from the model slot importance; the overall ('all') importance values are returned.

Usage

LearnerClassifRandomForestSRC$importance()

Returns

Named numeric().


Method selected_features()

Selected features are extracted from the model slot var.used.

Note: Due to a known issue in randomForestSRC, enabling var.used = "all.trees" causes prediction to fail. Therefore, this setting should be used exclusively for feature selection purposes and not when prediction is required.

Usage

LearnerClassifRandomForestSRC$selected_features()

Returns

character().
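
Given the prediction caveat above, a minimal sketch of a feature-selection-only workflow, where the trained learner is used solely to query selected features and never for prediction:

# var.used = "all.trees" enables feature tracking but breaks prediction
fs_learner = lrn("classif.rfsrc", var.used = "all.trees")
fs_learner$train(tsk("sonar"))
head(fs_learner$selected_features())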


Method oob_error()

OOB error extracted from the model slot err.rate.

Usage

LearnerClassifRandomForestSRC$oob_error()

Returns

numeric().
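
A minimal sketch of reading the OOB error after training, which gives a performance estimate without a held-out test set:

learner = lrn("classif.rfsrc")
learner$train(tsk("sonar"))
# OOB misclassification error, taken from the model's err.rate slot
learner$oob_error()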


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifRandomForestSRC$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

# Define the Learner
learner = lrn("classif.rfsrc", importance = "TRUE")
print(learner)
#> 
#> ── <LearnerClassifRandomForestSRC> (classif.rfsrc): Random Forest ──────────────
#> • Model: -
#> • Parameters: importance=TRUE
#> • Packages: mlr3, mlr3extralearners, and randomForestSRC
#> • Predict Types: [response] and prob
#> • Feature Types: logical, integer, numeric, and factor
#> • Encapsulation: none (fallback: -)
#> • Properties: importance, missings, multiclass, oob_error, selected_features,
#> twoclass, and weights
#> • Other settings: use_weights = 'use'

# Define a Task
task = tsk("sonar")
# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#>                          Sample size: 139
#>            Frequency of class labels: M=78, R=61
#>                      Number of trees: 500
#>            Forest terminal node size: 1
#>        Average no. of terminal nodes: 16.756
#> No. of variables tried at each split: 8
#>               Total no. of variables: 60
#>        Resampling used to grow trees: swor
#>     Resample size used to grow trees: 88
#>                             Analysis: RF-C
#>                               Family: class
#>                       Splitting rule: gini *random*
#>        Number of random split points: 10
#>                     Imbalanced ratio: 1.2787
#>                    (OOB) Brier score: 0.1276609
#>         (OOB) Normalized Brier score: 0.51064358
#>                            (OOB) AUC: 0.93022278
#>                       (OOB) Log-loss: 0.41151847
#>                         (OOB) PR-AUC: 0.91439222
#>                         (OOB) G-mean: 0.81223825
#>    (OOB) Requested performance error: 0.16546763, 0.06410256, 0.29508197
#> 
#> Confusion matrix:
#> 
#>           predicted
#>   observed  M  R class.error
#>          M 73  5      0.0641
#>          R 18 43      0.2951
#> 
#>       (OOB) Misclassification rate: 0.1654676
#> 
#> Random-classifier baselines (uniform):
#>    Brier: 0.25   Normalized Brier: 1   Log-loss: 0.69314718
print(learner$importance())
#>           V11           V12           V48            V9           V47 
#>  0.0517911833  0.0484819552  0.0454445011  0.0383480010  0.0368381044 
#>           V49           V36           V13           V10           V39 
#>  0.0339968362  0.0304566953  0.0303333958  0.0287306538  0.0211612111 
#>           V28           V37           V52           V34           V17 
#>  0.0186257351  0.0181561779  0.0172395775  0.0156589317  0.0152211000 
#>           V15           V18           V45           V21           V30 
#>  0.0143358197  0.0139519465  0.0137724118  0.0135309933  0.0134083756 
#>           V46           V16           V51           V35           V59 
#>  0.0129522050  0.0126409711  0.0111781759  0.0111591796  0.0110445608 
#>            V5           V20           V27           V14           V29 
#>  0.0110170367  0.0107471603  0.0092968416  0.0082845908  0.0079744939 
#>           V31           V23           V57           V26           V32 
#>  0.0078613036  0.0078305163  0.0075419180  0.0069505727  0.0068215632 
#>            V6            V3           V33            V4           V19 
#>  0.0066616520  0.0060987605  0.0059509242  0.0058037578  0.0056948924 
#>           V22           V54           V56           V55            V8 
#>  0.0050888259  0.0049351875  0.0038047368  0.0037890885  0.0037588086 
#>           V38           V42           V24           V41            V7 
#>  0.0037534418  0.0030468359  0.0030458402  0.0024871981  0.0024799327 
#>           V40           V25            V2           V60            V1 
#>  0.0023221353  0.0023028447  0.0017519807  0.0010060745  0.0008674028 
#>           V44           V50           V43           V58           V53 
#>  0.0007289310  0.0004347508  0.0003056321  0.0003003113 -0.0005966681 

# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> classif.ce 
#>  0.2173913
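
predictions$score() defaults to classification error. As a variant of the example above (reusing the same task and ids), probability predictions allow scoring probability-based measures such as AUC:

# Request probabilities instead of hard labels to enable AUC scoring
learner = lrn("classif.rfsrc", predict_type = "prob")
learner$train(task, row_ids = ids$train)
predictions = learner$predict(task, row_ids = ids$test)
predictions$score(msr("classif.auc"))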