Classification Random Forest SRC Learner
mlr_learners_classif.rfsrc.Rd
Random forest for classification.
Calls randomForestSRC::rfsrc() from package randomForestSRC.
Meta Information
Task type: “classif”
Predict Types: “response”, “prob”
Feature Types: “logical”, “integer”, “numeric”, “factor”
Required Packages: mlr3, mlr3extralearners, randomForestSRC
Parameters
| Id | Type | Default | Levels | Range |
|----|------|---------|--------|-------|
| ntree | integer | 500 | - | \([1, \infty)\) |
| mtry | integer | - | - | \([1, \infty)\) |
| mtry.ratio | numeric | - | - | \([0, 1]\) |
| nodesize | integer | 15 | - | \([1, \infty)\) |
| nodedepth | integer | - | - | \([1, \infty)\) |
| splitrule | character | gini | gini, auc, entropy | - |
| nsplit | integer | 10 | - | \([0, \infty)\) |
| importance | character | FALSE | FALSE, TRUE, none, permute, random, anti | - |
| block.size | integer | 10 | - | \([1, \infty)\) |
| bootstrap | character | by.root | by.root, by.node, none, by.user | - |
| samptype | character | swor | swor, swr | - |
| samp | untyped | - | - | - |
| membership | logical | FALSE | TRUE, FALSE | - |
| sampsize | untyped | - | - | - |
| sampsize.ratio | numeric | - | - | \([0, 1]\) |
| na.action | character | na.omit | na.omit, na.impute | - |
| nimpute | integer | 1 | - | \([1, \infty)\) |
| ntime | integer | - | - | \([1, \infty)\) |
| cause | integer | - | - | \([1, \infty)\) |
| proximity | character | FALSE | FALSE, TRUE, inbag, oob, all | - |
| distance | character | FALSE | FALSE, TRUE, inbag, oob, all | - |
| forest.wt | character | FALSE | FALSE, TRUE, inbag, oob, all | - |
| xvar.wt | untyped | - | - | - |
| split.wt | untyped | - | - | - |
| forest | logical | TRUE | TRUE, FALSE | - |
| var.used | character | FALSE | FALSE, all.trees, by.tree | - |
| split.depth | character | FALSE | FALSE, all.trees, by.tree | - |
| seed | integer | - | - | \((-\infty, -1]\) |
| do.trace | logical | FALSE | TRUE, FALSE | - |
| statistics | logical | FALSE | TRUE, FALSE | - |
| get.tree | untyped | - | - | - |
| outcome | character | train | train, test | - |
| ptn.count | integer | 0 | - | \([0, \infty)\) |
| cores | integer | 1 | - | \([1, \infty)\) |
| save.memory | logical | FALSE | TRUE, FALSE | - |
| perf.type | character | - | gmean, misclass, brier, none | - |
| case.depth | logical | FALSE | TRUE, FALSE | - |
Custom mlr3 parameters
mtry
: This hyperparameter can alternatively be set via the added hyperparameter mtry.ratio as mtry = max(ceiling(mtry.ratio * n_features), 1). Note that mtry and mtry.ratio are mutually exclusive (see the sketch below).

sampsize
: This hyperparameter can alternatively be set via the added hyperparameter sampsize.ratio as sampsize = max(ceiling(sampsize.ratio * n_obs), 1). Note that sampsize and sampsize.ratio are mutually exclusive.

cores
: This value is set as the option rf.cores during training and is set to 1 by default.
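For illustration, a minimal sketch of how mtry.ratio resolves to mtry, assuming the sonar task with 60 features (as in the examples below):

library(mlr3extralearners)  # registers classif.rfsrc
task = mlr3::tsk("sonar")   # 60 features
# mtry.ratio = 0.25 resolves to mtry = max(ceiling(0.25 * 60), 1) = 15
learner = mlr3::lrn("classif.rfsrc", mtry.ratio = 0.25)
learner$train(task)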
References
Breiman, Leo (2001). “Random Forests.” Machine Learning, 45(1), 5–32. ISSN 1573-0565, doi:10.1023/A:1010933404324.
See also
as.data.table(mlr_learners) for a table of available Learners in the running session (depending on the loaded packages).
Chapter in the mlr3book: https://mlr3book.mlr-org.com/basics.html#learners
mlr3learners for a selection of recommended learners.
mlr3cluster for unsupervised clustering learners.
mlr3pipelines to combine learners with pre- and postprocessing steps.
mlr3tuning for tuning of hyperparameters, mlr3tuningspaces for established default tuning spaces.
Super classes
mlr3::Learner
-> mlr3::LearnerClassif
-> LearnerClassifRandomForestSRC
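A minimal check of the documented inheritance chain, assuming mlr3extralearners is loaded:

learner = mlr3::lrn("classif.rfsrc")
inherits(learner, "LearnerClassif")  # TRUE
inherits(learner, "Learner")         # TRUE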
Methods
Method importance()
The importance scores are extracted from the model slot importance and are returned for "all", i.e. the overall scores rather than the per-class columns.
Returns
Named numeric().
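A brief usage sketch, assuming the learner has been trained with importance enabled (as in the examples below):

# five features with the largest overall importance scores
head(sort(learner$importance(), decreasing = TRUE), 5)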
Examples
# Define the Learner
learner = mlr3::lrn("classif.rfsrc", importance = "TRUE")
print(learner)
#> <LearnerClassifRandomForestSRC:classif.rfsrc>: Random Forest
#> * Model: -
#> * Parameters: importance=TRUE
#> * Packages: mlr3, mlr3extralearners, randomForestSRC
#> * Predict Types: [response], prob
#> * Feature Types: logical, integer, numeric, factor
#> * Properties: importance, missings, multiclass, oob_error, twoclass,
#> weights
# Define a Task
task = mlr3::tsk("sonar")
# Create train and test set
ids = mlr3::partition(task)
# Train the learner on the training ids
learner$train(task, row_ids = ids$train)
print(learner$model)
#> Sample size: 139
#> Frequency of class labels: 78, 61
#> Number of trees: 500
#> Forest terminal node size: 1
#> Average no. of terminal nodes: 16.938
#> No. of variables tried at each split: 8
#> Total no. of variables: 60
#> Resampling used to grow trees: swor
#> Resample size used to grow trees: 88
#> Analysis: RF-C
#> Family: class
#> Splitting rule: gini *random*
#> Number of random split points: 10
#> Imbalanced ratio: 1.2787
#> (OOB) Brier score: 0.13364492
#> (OOB) Normalized Brier score: 0.53457968
#> (OOB) AUC: 0.92076503
#> (OOB) Log-loss: 0.42073707
#> (OOB) PR-AUC: 0.89938638
#> (OOB) G-mean: 0.74304962
#> (OOB) Requested performance error: 0.22302158, 0.08974359, 0.39344262
#>
#> Confusion matrix:
#>
#> predicted
#> observed M R class.error
#> M 71 7 0.0897
#> R 24 37 0.3934
#>
#> (OOB) Misclassification rate: 0.2230216
print(learner$importance())
#> V48 V11 V9 V10 V47
#> 0.0530502491 0.0495091489 0.0387281895 0.0362589124 0.0349376353
#> V49 V12 V16 V36 V17
#> 0.0323350261 0.0259084639 0.0235855668 0.0235839795 0.0202936681
#> V18 V37 V51 V45 V34
#> 0.0192723302 0.0168973068 0.0151605251 0.0147004791 0.0126753389
#> V8 V35 V28 V43 V14
#> 0.0120572591 0.0117979817 0.0116667881 0.0116215283 0.0104822426
#> V21 V5 V22 V19 V6
#> 0.0103522199 0.0091767304 0.0091678281 0.0090369355 0.0088839185
#> V23 V15 V52 V39 V4
#> 0.0085607051 0.0084710734 0.0084645232 0.0084255466 0.0082507600
#> V46 V26 V40 V20 V44
#> 0.0081641352 0.0081598435 0.0078731914 0.0074020169 0.0068119094
#> V32 V59 V13 V3 V30
#> 0.0056810758 0.0053635414 0.0051153219 0.0051013505 0.0048137072
#> V27 V42 V33 V29 V50
#> 0.0047990510 0.0045346361 0.0043595052 0.0043541090 0.0034880424
#> V31 V25 V58 V7 V57
#> 0.0034742882 0.0033651633 0.0032215899 0.0030710304 0.0029345513
#> V38 V60 V54 V1 V55
#> 0.0025948453 0.0024937161 0.0024648405 0.0023272254 0.0020290762
#> V41 V53 V24 V56 V2
#> 0.0007342663 0.0004443511 0.0002686788 -0.0001323881 -0.0008654721
# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)
# Score the predictions
predictions$score()
#> classif.ce
#> 0.2318841
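Since the learner also supports the "prob" predict type, predictions can be scored with a probability-based measure. A short follow-up sketch, reusing the task and partition from above with mlr3's classif.auc measure:

learner = mlr3::lrn("classif.rfsrc", predict_type = "prob")
learner$train(task, row_ids = ids$train)
predictions = learner$predict(task, row_ids = ids$test)
# score with area under the ROC curve instead of the default classif.ce
predictions$score(mlr3::msr("classif.auc"))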