Random forest for classification. Calls randomForest::randomForest() from package randomForest.

Dictionary

This Learner can be instantiated via lrn():

lrn("classif.randomForest")

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob”

  • Feature Types: “logical”, “integer”, “numeric”, “factor”, “ordered”

  • Required Packages: mlr3, mlr3extralearners, randomForest

Parameters

| Id          | Type      | Default | Levels               | Range            |
|-------------|-----------|---------|----------------------|------------------|
| ntree       | integer   | 500     | -                    | \([1, \infty)\)  |
| mtry        | integer   | -       | -                    | \([1, \infty)\)  |
| replace     | logical   | TRUE    | TRUE, FALSE          | -                |
| classwt     | untyped   | NULL    | -                    | -                |
| cutoff      | untyped   | -       | -                    | -                |
| strata      | untyped   | -       | -                    | -                |
| sampsize    | untyped   | -       | -                    | -                |
| nodesize    | integer   | 1       | -                    | \([1, \infty)\)  |
| maxnodes    | integer   | -       | -                    | \([1, \infty)\)  |
| importance  | character | FALSE   | accuracy, gini, none | -                |
| localImp    | logical   | FALSE   | TRUE, FALSE          | -                |
| proximity   | logical   | FALSE   | TRUE, FALSE          | -                |
| oob.prox    | logical   | -       | TRUE, FALSE          | -                |
| norm.votes  | logical   | TRUE    | TRUE, FALSE          | -                |
| do.trace    | logical   | FALSE   | TRUE, FALSE          | -                |
| keep.forest | logical   | TRUE    | TRUE, FALSE          | -                |
| keep.inbag  | logical   | FALSE   | TRUE, FALSE          | -                |
| predict.all | logical   | FALSE   | TRUE, FALSE          | -                |
| nodes       | logical   | FALSE   | TRUE, FALSE          | -                |
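Hyperparameters can be set either at construction time or afterwards through the learner's parameter set. A brief sketch (assuming mlr3 and mlr3extralearners are installed and attached; the specific values chosen here are purely illustrative):

```r
library(mlr3)
library(mlr3extralearners)

# Set hyperparameters at construction ...
learner = lrn("classif.randomForest", ntree = 1000, mtry = 5)

# ... or later, on an existing learner, via its parameter set
learner$param_set$set_values(nodesize = 5, importance = "gini")

# Inspect the currently set values
learner$param_set$values
```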

References

Breiman, Leo (2001). “Random Forests.” Machine Learning, 45(1), 5–32. ISSN 1573-0565, doi:10.1023/A:1010933404324.

Author

pat-s

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifRandomForest

Methods

Inherited methods


Method new()

Creates a new instance of this R6 class.


Method importance()

The importance scores are extracted from the model slot importance. The parameter importance must be set to either "accuracy" or "gini".

Usage

LearnerClassifRandomForest$importance()

Returns

Named numeric().


Method oob_error()

The out-of-bag (OOB) error is extracted from the model slot err.rate.

Usage

LearnerClassifRandomForest$oob_error()

Returns

numeric(1).
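Since the OOB error is estimated from each tree's out-of-bag samples, it can be retrieved after training without a separate test set. A minimal sketch (assuming mlr3 and mlr3extralearners are attached; "sonar" is one of mlr3's built-in example tasks):

```r
library(mlr3)
library(mlr3extralearners)

task = tsk("sonar")
learner = lrn("classif.randomForest")
learner$train(task)

# Misclassification rate estimated from the out-of-bag samples
learner$oob_error()
```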


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifRandomForest$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.
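Because learners are R6 objects, plain assignment copies a reference, not the object; clone(deep = TRUE) produces an independent copy whose parameter set can be modified without affecting the original. A minimal sketch:

```r
library(mlr3)
library(mlr3extralearners)

learner = lrn("classif.randomForest", ntree = 100)
copy = learner$clone(deep = TRUE)

# Changing the clone's parameters does not affect the original
copy$param_set$values$ntree = 500
learner$param_set$values$ntree  # still 100
```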

Examples

# Define the Learner
learner = lrn("classif.randomForest", importance = "accuracy")
print(learner)
#> 
#> ── <LearnerClassifRandomForest> (classif.randomForest): Random Forest ──────────
#> • Model: -
#> • Parameters: importance=accuracy
#> • Packages: mlr3, mlr3extralearners, and randomForest
#> • Predict Types: [response] and prob
#> • Feature Types: logical, integer, numeric, factor, and ordered
#> • Encapsulation: none (fallback: -)
#> • Properties: importance, multiclass, oob_error, twoclass, and weights
#> • Other settings: use_weights = 'use', predict_raw = 'FALSE'

# Define a Task
task = tsk("sonar")
# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> 
#> Call:
#>  randomForest(formula = formula, data = data, classwt = classwt,      cutoff = cutoff, importance = TRUE) 
#>                Type of random forest: classification
#>                      Number of trees: 500
#> No. of variables tried at each split: 7
#> 
#>         OOB estimate of  error rate: 19.42%
#> Confusion matrix:
#>    M  R class.error
#> M 60 11   0.1549296
#> R 16 52   0.2352941
print(learner$importance())
#>           V11           V12           V48           V13           V36 
#>  2.144880e-02  2.081558e-02  1.573542e-02  1.304245e-02  1.252391e-02 
#>           V10           V37            V9           V49           V21 
#>  1.150318e-02  1.093965e-02  1.075934e-02  9.456982e-03  7.564573e-03 
#>           V47           V28           V51           V44           V45 
#>  6.559029e-03  5.717482e-03  5.219095e-03  5.145782e-03  4.661065e-03 
#>           V20           V22           V46           V15           V19 
#>  4.267657e-03  4.059881e-03  3.314002e-03  2.818498e-03  2.787375e-03 
#>           V29           V27           V23           V32            V5 
#>  2.757171e-03  2.703906e-03  2.641552e-03  2.625965e-03  2.623210e-03 
#>            V4           V31           V16           V35           V17 
#>  2.300445e-03  2.199162e-03  2.149985e-03  2.088692e-03  1.981395e-03 
#>           V42            V1           V40           V24           V43 
#>  1.980217e-03  1.908649e-03  1.842983e-03  1.648738e-03  1.624590e-03 
#>           V60           V34           V52           V59           V38 
#>  1.504993e-03  1.335469e-03  1.273139e-03  1.261406e-03  1.244264e-03 
#>           V14            V8           V18           V50           V25 
#>  1.232676e-03  1.223070e-03  1.205424e-03  1.097373e-03  1.060139e-03 
#>           V33            V7           V56           V53           V54 
#>  8.799397e-04  8.196357e-04  5.438906e-04  5.291405e-04  5.240304e-04 
#>           V58           V30           V41           V26           V39 
#>  4.903876e-04  4.454708e-04  4.292731e-04  4.243554e-04  3.103571e-04 
#>            V3           V57            V2           V55            V6 
#> -9.848617e-06 -1.404735e-05 -3.859560e-05 -4.261768e-04 -6.883933e-04 

# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> classif.ce 
#>  0.1594203