Regularized random forest for regression. Calls RRF::RRF() from package RRF.

Dictionary

This Learner can be instantiated via lrn():

lrn("regr.RRF")

Meta Information

  • Task type: “regr”

  • Predict Types: “response”

  • Feature Types: “integer”, “numeric”, “factor”

  • Required Packages: mlr3, mlr3extralearners, RRF
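
The upstream package must be installed before the learner can be used; a brief sketch (install_learners() is a convenience helper from mlr3extralearners):

# Install the learner's upstream dependency
install.packages("RRF")
# or, equivalently:
mlr3extralearners::install_learners("regr.RRF")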

Parameters

Id           Type     Default  Levels       Range
ntree        integer  500      -            \([1, \infty)\)
mtry         integer  -        -            \([1, \infty)\)
nodesize     integer  -        -            \([1, \infty)\)
replace      logical  TRUE     TRUE, FALSE  -
flagReg      integer  1        -            \([0, \infty)\)
coefReg      numeric  0.8      -            \((-\infty, \infty)\)
feaIni       untyped  -        -            -
corr.bias    logical  FALSE    TRUE, FALSE  -
maxnodes     integer  -        -            \([1, \infty)\)
importance   logical  FALSE    TRUE, FALSE  -
localImp     logical  FALSE    TRUE, FALSE  -
nPerm        integer  1        -            \([1, \infty)\)
proximity    logical  FALSE    TRUE, FALSE  -
oob.prox     logical  FALSE    TRUE, FALSE  -
do.trace     logical  FALSE    TRUE, FALSE  -
keep.inbag   logical  FALSE    TRUE, FALSE  -
keep.forest  logical  TRUE     TRUE, FALSE  -
strata       untyped  -        -            -
sampsize     untyped  -        -            -
predict.all  logical  FALSE    TRUE, FALSE  -
nodes        logical  FALSE    TRUE, FALSE  -
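
Hyperparameters can also be changed on an existing learner through its param_set; a short sketch with values picked purely for illustration:

learner = lrn("regr.RRF")
# set_values() updates hyperparameters in place
learner$param_set$set_values(ntree = 1000, nodesize = 5)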

References

Deng, Houtao, Runger, George (2012). “Feature selection via regularized trees.” In 2012 International Joint Conference on Neural Networks (IJCNN), 1–8. IEEE. doi:10.1109/IJCNN.2012.6252640.

Deng, Houtao, Runger, George (2013). “Gene selection with guided regularized random forest.” Pattern Recognition, 46(12), 3483–3489. doi:10.1016/j.patcog.2013.05.021.

Author

awinterstetter

Super classes

mlr3::Learner -> mlr3::LearnerRegr -> LearnerRegrRRF

Methods

Method new()

Creates a new instance of this R6 class.

Usage

LearnerRegrRRF$new()

Method importance()

The importance scores are extracted from the model slot importance.

Usage

LearnerRegrRRF$importance()

Returns

Named numeric().
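
A usage sketch, assuming the learner has already been trained on a task:

# Importance scores are named by feature and sorted in decreasing order
imp = learner$importance()
head(imp, 3)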


Method oob_error()

The out-of-bag (OOB) error is extracted from the model slot mse.

Usage

LearnerRegrRRF$oob_error()

Returns

numeric(1).
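
A usage sketch, assuming a trained learner; the value is the out-of-bag mean squared error stored in the model's mse slot:

# Out-of-bag mean squared error of the fitted forest
learner$oob_error()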


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerRegrRRF$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.
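
A brief sketch of deep cloning: with deep = TRUE, mutable components such as the param_set are copied as well, so the copy can be reconfigured independently:

learner2 = learner$clone(deep = TRUE)
# changing the clone's hyperparameters leaves the original learner untouched
learner2$param_set$set_values(ntree = 100)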

Examples

# Define the Learner
learner = lrn("regr.RRF")
print(learner)
#> 
#> ── <LearnerRegrRRF> (regr.RRF): Regularized Random Forest ──────────────────────
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3, mlr3extralearners, and RRF
#> • Predict Types: [response]
#> • Feature Types: integer, numeric, and factor
#> • Encapsulation: none (fallback: -)
#> • Properties: importance and oob_error
#> • Other settings: use_weights = 'error'

# Define a Task
task = tsk("mtcars")

# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> 
#> Call:
#>  RRF(formula = task$formula(), data = task$data()) 
#>                Type of random forest: regression
#>                      Number of trees: 500
#> No. of variables tried at each split: 3
#> 
#>           Mean of squared residuals: 7.789201
#>                     % Var explained: 81.25
print(learner$importance())
#>      disp        wt       cyl        hp      carb      drat      gear      qsec 
#> 206.70448 165.11618 138.55141 121.29424  56.86631  48.55964  24.26459  20.13651 
#>        vs        am 
#>  16.11049  11.93123 

# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> regr.mse 
#> 2.907525
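
The learner also plugs into the standard mlr3 resampling workflow; a minimal sketch using 3-fold cross-validation and RMSE (both choices are illustrative):

# Cross-validate and aggregate the per-fold scores
rr = resample(tsk("mtcars"), lrn("regr.RRF"), rsmp("cv", folds = 3))
rr$aggregate(msr("regr.rmse"))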