
Random forest for classification. Calls randomForest::randomForest() from package randomForest.

Dictionary

This Learner can be instantiated via lrn():

lrn("classif.randomForest")

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob”

  • Feature Types: “logical”, “integer”, “numeric”, “factor”, “ordered”

  • Required Packages: mlr3, mlr3extralearners, randomForest
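
As a brief sketch of the predict types listed above, probability predictions can be requested at construction instead of the default "response" (learner_prob is an illustrative name, not part of the package):

# Request class probabilities instead of hard labels
learner_prob = lrn("classif.randomForest", predict_type = "prob")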

Parameters

Id            Type       Default  Levels                Range
ntree         integer    500      -                     [1, ∞)
mtry          integer    -        -                     [1, ∞)
replace       logical    TRUE     TRUE, FALSE           -
classwt       untyped    NULL     -                     -
cutoff        untyped    -        -                     -
strata        untyped    -        -                     -
sampsize      untyped    -        -                     -
nodesize      integer    1        -                     [1, ∞)
maxnodes      integer    -        -                     [1, ∞)
importance    character  FALSE    accuracy, gini, none  -
localImp      logical    FALSE    TRUE, FALSE           -
proximity     logical    FALSE    TRUE, FALSE           -
oob.prox      logical    -        TRUE, FALSE           -
norm.votes    logical    TRUE     TRUE, FALSE           -
do.trace      logical    FALSE    TRUE, FALSE           -
keep.forest   logical    TRUE     TRUE, FALSE           -
keep.inbag    logical    FALSE    TRUE, FALSE           -
predict.all   logical    FALSE    TRUE, FALSE           -
nodes         logical    FALSE    TRUE, FALSE           -
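
Hyperparameters can be supplied at construction or changed later through the parameter set. A small sketch using a few of the parameters listed above (the values are illustrative, not tuned defaults):

# Set hyperparameters at construction time
learner = lrn("classif.randomForest", ntree = 1000, mtry = 5, nodesize = 3)

# ... or modify them afterwards via the parameter set
learner$param_set$values$ntree = 250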

References

Breiman, Leo (2001). “Random Forests.” Machine Learning, 45(1), 5–32. ISSN 1573-0565, doi:10.1023/A:1010933404324.

Author

pat-s

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifRandomForest

Methods

Inherited methods


Method new()

Creates a new instance of this R6 class.

Usage

LearnerClassifRandomForest$new()


Method importance()

The importance scores are extracted from the model slot importance. The parameter importance must be set to "accuracy" or "gini".

Usage

LearnerClassifRandomForest$importance()

Returns

Named numeric().
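
A brief sketch of calling this method; it assumes the importance parameter was set at construction, since otherwise no importance scores are stored in the model:

# importance() requires the 'importance' parameter to be set during construction
learner = lrn("classif.randomForest", importance = "gini")
learner$train(tsk("sonar"))
head(learner$importance())  # named numeric vector of importance scores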


Method oob_error()

OOB errors are extracted from the model slot err.rate.

Usage

LearnerClassifRandomForest$oob_error()

Returns

numeric(1).
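
A short sketch of retrieving the out-of-bag error after training (the exact value depends on the random seed):

learner = lrn("classif.randomForest")
learner$train(tsk("sonar"))
learner$oob_error()  # single numeric value: the OOB classification error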


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifRandomForest$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.
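
Because learners are R6 objects, plain assignment only copies a reference; an independent copy requires a clone. A minimal sketch, assuming deep = TRUE so that mutable fields such as the parameter set are copied as well (alias and copy are illustrative names):

learner = lrn("classif.randomForest")
alias = learner                        # same object: changes to one affect the other
copy = learner$clone(deep = TRUE)      # independent copy with its own parameter set
copy$param_set$values$ntree = 100      # intended not to affect 'learner'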

Examples

# Define the Learner
learner = lrn("classif.randomForest", importance = "accuracy")
print(learner)
#> 
#> ── <LearnerClassifRandomForest> (classif.randomForest): Random Forest ──────────
#> • Model: -
#> • Parameters: importance=accuracy
#> • Packages: mlr3, mlr3extralearners, and randomForest
#> • Predict Types: [response] and prob
#> • Feature Types: logical, integer, numeric, factor, and ordered
#> • Encapsulation: none (fallback: -)
#> • Properties: importance, multiclass, oob_error, twoclass, and weights
#> • Other settings: use_weights = 'use'

# Define a Task
task = tsk("sonar")
# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> 
#> Call:
#>  randomForest(formula = formula, data = data, classwt = classwt,      cutoff = cutoff, importance = TRUE) 
#>                Type of random forest: classification
#>                      Number of trees: 500
#> No. of variables tried at each split: 7
#> 
#>         OOB estimate of  error rate: 20.14%
#> Confusion matrix:
#>    M  R class.error
#> M 57 13   0.1857143
#> R 15 54   0.2173913
print(learner$importance())
#>           V10           V11           V12            V9           V48 
#>  0.0236657562  0.0218849275  0.0196456409  0.0112775345  0.0112685116 
#>           V13           V51           V45           V28           V36 
#>  0.0099084587  0.0094799096  0.0090131344  0.0085073092  0.0082405354 
#>           V27           V46           V37           V49           V18 
#>  0.0061936899  0.0059476097  0.0058778123  0.0047354982  0.0041040657 
#>           V47           V44           V16           V17           V14 
#>  0.0040211414  0.0037086102  0.0036445987  0.0036314918  0.0028367118 
#>           V34           V15           V43           V30           V59 
#>  0.0027265327  0.0026681918  0.0025480003  0.0022866843  0.0021391976 
#>           V40            V1            V7           V52           V22 
#>  0.0019944001  0.0019747599  0.0018617517  0.0018192359  0.0017905172 
#>           V32           V29           V20           V35            V5 
#>  0.0017847207  0.0017035516  0.0016594961  0.0016307288  0.0014964652 
#>           V26           V23           V53           V19           V39 
#>  0.0014846096  0.0013648152  0.0012856937  0.0012751547  0.0012109509 
#>           V31           V55           V24           V21            V2 
#>  0.0011992791  0.0011295395  0.0011070465  0.0010759565  0.0009899498 
#>            V6           V41           V33           V25           V42 
#>  0.0009510100  0.0009345773  0.0009280118  0.0008412527  0.0007994251 
#>           V54           V60            V8            V3           V58 
#>  0.0006155597  0.0005396103  0.0005006996  0.0003126762  0.0002952348 
#>           V50           V56            V4           V57           V38 
#>  0.0001667621 -0.0000397199 -0.0001113896 -0.0004923574 -0.0005164539 

# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> classif.ce 
#>  0.1449275
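
By default, score() evaluates the classification error (classif.ce); other measures can be passed explicitly. A brief sketch:

# Score with classification accuracy instead of the default classification error
predictions$score(msr("classif.acc"))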