Classification Imbalanced Random Forest SRC Learner
Source: R/learner_randomForestSRC_classif_imbalanced_rfsrc.R
mlr_learners_classif.imbalanced_rfsrc.Rd
Imbalanced random forest for classification between two classes.
Calls randomForestSRC::imbalanced.rfsrc()
from package randomForestSRC.
Meta Information
Task type: “classif”
Predict Types: “response”, “prob”
Feature Types: “logical”, “integer”, “numeric”, “factor”, “ordered”
Required Packages: mlr3, randomForestSRC
Parameters
| Id | Type | Default | Levels | Range |
|---|---|---|---|---|
| ntree | integer | 500 | - | \([1, \infty)\) |
| method | character | rfq | rfq, brf, standard | - |
| block.size | integer | 10 | - | \([1, \infty)\) |
| fast | logical | FALSE | TRUE, FALSE | - |
| ratio | numeric | - | - | \([0, 1]\) |
| mtry | integer | - | - | \([1, \infty)\) |
| mtry.ratio | numeric | - | - | \([0, 1]\) |
| nodesize | integer | 15 | - | \([1, \infty)\) |
| nodedepth | integer | - | - | \([1, \infty)\) |
| splitrule | character | gini | gini, auc, entropy | - |
| nsplit | integer | 10 | - | \([0, \infty)\) |
| importance | character | FALSE | FALSE, TRUE, none, permute, random, anti | - |
| bootstrap | character | by.root | by.root, by.node, none, by.user | - |
| samptype | character | swor | swor, swr | - |
| samp | untyped | - | - | - |
| membership | logical | FALSE | TRUE, FALSE | - |
| sampsize | untyped | - | - | - |
| sampsize.ratio | numeric | - | - | \([0, 1]\) |
| na.action | character | na.omit | na.omit, na.impute | - |
| nimpute | integer | 1 | - | \([1, \infty)\) |
| ntime | integer | - | - | \([1, \infty)\) |
| proximity | character | FALSE | FALSE, TRUE, inbag, oob, all | - |
| distance | character | FALSE | FALSE, TRUE, inbag, oob, all | - |
| forest.wt | character | FALSE | FALSE, TRUE, inbag, oob, all | - |
| xvar.wt | untyped | - | - | - |
| split.wt | untyped | - | - | - |
| forest | logical | TRUE | TRUE, FALSE | - |
| var.used | character | FALSE | FALSE, all.trees | - |
| split.depth | character | FALSE | FALSE, all.trees | - |
| seed | integer | - | - | \((-\infty, -1]\) |
| do.trace | logical | FALSE | TRUE, FALSE | - |
| get.tree | untyped | - | - | - |
| outcome | character | train | train, test | - |
| ptn.count | integer | 0 | - | \([0, \infty)\) |
| cores | integer | 1 | - | \([1, \infty)\) |
| save.memory | logical | FALSE | TRUE, FALSE | - |
| perf.type | character | - | gmean, misclass, brier, none | - |
| case.depth | logical | FALSE | TRUE, FALSE | - |
| marginal.xvar | untyped | NULL | - | - |
Custom mlr3 parameters
mtry
: This hyperparameter can alternatively be set via the added hyperparameter mtry.ratio as mtry = max(ceiling(mtry.ratio * n_features), 1). Note that mtry and mtry.ratio are mutually exclusive.

sampsize
: This hyperparameter can alternatively be set via the added hyperparameter sampsize.ratio as sampsize = max(ceiling(sampsize.ratio * n_obs), 1). Note that sampsize and sampsize.ratio are mutually exclusive.

cores
: This value is set as the option rf.cores during training and defaults to 1.
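A minimal sketch of setting these added hyperparameters (assuming mlr3 and mlr3extralearners are loaded; the parameter values are illustrative, not recommendations):

```r
library(mlr3)
library(mlr3extralearners)

# mtry.ratio and sampsize.ratio are resolved relative to the task:
# on the 60-feature "sonar" task, mtry.ratio = 0.5 yields
# mtry = max(ceiling(0.5 * 60), 1) = 30
learner = lrn("classif.imbalanced_rfsrc",
  method = "rfq",         # random forest quantile classifier (default)
  mtry.ratio = 0.5,       # instead of an absolute mtry
  sampsize.ratio = 0.8,   # instead of an absolute sampsize
  cores = 2               # forwarded as option rf.cores during training
)
learner$train(tsk("sonar"))
```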
References
O’Brien R, Ishwaran H (2019). “A random forests quantile classifier for class imbalanced data.” Pattern Recognition, 90, 232–249. doi:10.1016/j.patcog.2019.01.036 .
Chen C, Liaw A, Breiman L (2004). “Using Random Forest to Learn Imbalanced Data.” University of California, Berkeley.
See also
as.data.table(mlr_learners)
for a table of available Learners in the running session (depending on the loaded packages).

Chapter in the mlr3book: https://mlr3book.mlr-org.com/basics.html#learners
mlr3learners for a selection of recommended learners.
mlr3cluster for unsupervised clustering learners.
mlr3pipelines to combine learners with pre- and postprocessing steps.
mlr3tuning for tuning of hyperparameters, mlr3tuningspaces for established default tuning spaces.
Super classes
mlr3::Learner
-> mlr3::LearnerClassif
-> LearnerClassifImbalancedRandomForestSRC
Methods
Inherited methods
Method importance()
The importance scores are extracted from the model slot importance.
Returns
Named numeric().
Method selected_features()
Selected features are extracted from the model slot var.used.
Note: Due to a known issue in randomForestSRC, enabling var.used = "all.trees" causes prediction to fail. This setting should therefore be used exclusively for feature selection and not when prediction is required.
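A sketch of using this method for feature selection in line with the note above (a separate learner, trained without var.used, should be used for prediction):

```r
library(mlr3)
library(mlr3extralearners)

# Train with var.used = "all.trees" purely to extract selected features;
# do not reuse this trained learner for prediction.
fs_learner = lrn("classif.imbalanced_rfsrc", var.used = "all.trees")
fs_learner$train(tsk("sonar"))
feats = fs_learner$selected_features()  # character vector of feature names
```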
Examples
# Define the Learner
learner = lrn("classif.imbalanced_rfsrc", importance = "TRUE")
print(learner)
#>
#> ── <LearnerClassifImbalancedRandomForestSRC> (classif.imbalanced_rfsrc): Imbalan
#> • Model: -
#> • Parameters: importance=TRUE
#> • Packages: mlr3 and randomForestSRC
#> • Predict Types: [response] and prob
#> • Feature Types: logical, integer, numeric, factor, and ordered
#> • Encapsulation: none (fallback: -)
#> • Properties: importance, missings, oob_error, selected_features, twoclass, and
#> weights
#> • Other settings: use_weights = 'use'
# Define a Task
task = tsk("sonar")
# Create train and test set
ids = partition(task)
# Train the learner on the training ids
learner$train(task, row_ids = ids$train)
print(learner$model)
#> Sample size: 139
#> Frequency of class labels: M=74, R=65
#> Number of trees: 3000
#> Forest terminal node size: 1
#> Average no. of terminal nodes: 17.7307
#> No. of variables tried at each split: 8
#> Total no. of variables: 60
#> Resampling used to grow trees: swor
#> Resample size used to grow trees: 88
#> Analysis: RFQ
#> Family: class
#> Splitting rule: auc *random*
#> Number of random split points: 10
#> Imbalanced ratio: 1.1385
#> (OOB) Brier score: 0.14799709
#> (OOB) Normalized Brier score: 0.59198835
#> (OOB) AUC: 0.90031185
#> (OOB) Log-loss: 0.46306217
#> (OOB) PR-AUC: 0.89063736
#> (OOB) G-mean: 0.80280175
#> (OOB) Requested performance error: 0.19719825
#>
#> Confusion matrix:
#>
#> predicted
#> observed M R class.error
#> M 62 12 0.1622
#> R 15 50 0.2308
#>
#> (OOB) Misclassification rate: 0.1942446
#>
#> Random-classifier baselines (uniform):
#> Brier: 0.25 Normalized Brier: 1 Log-loss: 0.69314718
print(learner$importance())
#> V36 V27 V16 V18 V42 V10
#> 0.012002262 0.009901861 0.008068564 0.008068564 0.008068564 0.003633780
#> V35 V37 V49 V17 V26 V33
#> 0.003633780 0.003633780 0.003633780 0.001685062 0.001685062 0.001685062
#> V47 V52 V54 V55 V59 V12
#> 0.001685062 0.001685062 0.001685062 0.001685062 0.001685062 0.000000000
#> V14 V15 V19 V2 V31 V6
#> 0.000000000 0.000000000 0.000000000 0.000000000 0.000000000 0.000000000
#> V51 V11 V13 V24 V29 V32
#> -0.006319848 -0.006448310 -0.006448310 -0.006448310 -0.006448310 -0.006448310
#> V44 V5 V57 V7 V8 V3
#> -0.006448310 -0.006448310 -0.006448310 -0.006448310 -0.006448310 -0.007988274
#> V34 V39 V40 V41 V53 V56
#> -0.007988274 -0.007988274 -0.007988274 -0.007988274 -0.007988274 -0.007988274
#> V58 V28 V22 V30 V38 V45
#> -0.007988274 -0.010931725 -0.012845644 -0.012845644 -0.012845644 -0.012845644
#> V1 V20 V25 V43 V46 V60
#> -0.014500748 -0.014500748 -0.014500748 -0.014500748 -0.014500748 -0.014500748
#> V4 V48 V21 V23 V50 V9
#> -0.015898608 -0.019193190 -0.020961738 -0.020961738 -0.020961738 -0.027372445
# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)
# Score the predictions
predictions$score()
#> classif.ce
#> 0.1449275