Regularized random forest for regression. Calls RRF::RRF() from package RRF.

Dictionary

This Learner can be instantiated via lrn():

lrn("regr.RRF")

Meta Information

  • Task type: “regr”

  • Predict Types: “response”

  • Feature Types: “integer”, “numeric”, “factor”

  • Required Packages: mlr3, mlr3extralearners, RRF

Parameters

Id            Type     Default  Levels       Range
ntree         integer  500      -            [1, ∞)
mtry          integer  -        -            [1, ∞)
nodesize      integer  -        -            [1, ∞)
replace       logical  TRUE     TRUE, FALSE  -
flagReg       integer  1        -            [0, ∞)
coefReg       numeric  0.8      -            (-∞, ∞)
feaIni        untyped  -        -            -
corr.bias     logical  FALSE    TRUE, FALSE  -
maxnodes      integer  -        -            [1, ∞)
importance    logical  FALSE    TRUE, FALSE  -
localImp      logical  FALSE    TRUE, FALSE  -
nPerm         integer  1        -            [1, ∞)
proximity     logical  FALSE    TRUE, FALSE  -
oob.prox      logical  FALSE    TRUE, FALSE  -
do.trace      logical  FALSE    TRUE, FALSE  -
keep.inbag    logical  FALSE    TRUE, FALSE  -
keep.forest   logical  TRUE     TRUE, FALSE  -
strata        untyped  -        -            -
sampsize      untyped  -        -            -
predict.all   logical  FALSE    TRUE, FALSE  -
nodes         logical  FALSE    TRUE, FALSE  -
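
As a construction-time sketch (the values below are illustrative, not defaults), the regularization behaviour is controlled chiefly by coefReg and flagReg:

# A minimal sketch, assuming mlr3 and mlr3extralearners are installed.
# coefReg is the regularization coefficient that penalizes splitting on
# features not yet selected; flagReg = 1 enables the regularized variant.
library(mlr3)
library(mlr3extralearners)

learner = lrn("regr.RRF", ntree = 300, coefReg = 0.5, flagReg = 1)
learner$param_set$values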

References

Deng, Houtao, Runger, George (2012). “Feature selection via regularized trees.” In 2012 International Joint Conference on Neural Networks (IJCNN), 1–8. IEEE. doi:10.1109/IJCNN.2012.6252640.

Deng, Houtao, Runger, George (2013). “Gene selection with guided regularized random forest.” Pattern Recognition, 46(12), 3483–3489. doi:10.1016/j.patcog.2013.05.021.

Author

awinterstetter

Super classes

mlr3::Learner -> mlr3::LearnerRegr -> LearnerRegrRRF

Methods

Method new()

Creates a new instance of this R6 class.

Usage

LearnerRegrRRF$new()

Method importance()

The importance scores are extracted from the model slot importance.

Usage

LearnerRegrRRF$importance()

Returns

Named numeric().
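
For instance, a minimal usage sketch (using the built-in mtcars task, as in the Examples below):

# Hedged sketch: train, then read the named vector of importance scores.
learner = lrn("regr.RRF")
learner$train(tsk("mtcars"))
learner$importance()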


Method oob_error()

The out-of-bag (OOB) error is extracted from the model slot mse.

Usage

LearnerRegrRRF$oob_error()

Returns

numeric(1).
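
A matching sketch for the OOB error (same task as above):

# Hedged sketch: after training, oob_error() returns a single numeric
# value derived from the model's mse slot.
learner = lrn("regr.RRF")
learner$train(tsk("mtcars"))
learner$oob_error()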


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerRegrRRF$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

# Define the Learner
learner = lrn("regr.RRF")
print(learner)
#> 
#> ── <LearnerRegrRRF> (regr.RRF): Regularized Random Forest ──────────────────────
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3, mlr3extralearners, and RRF
#> • Predict Types: [response]
#> • Feature Types: integer, numeric, and factor
#> • Encapsulation: none (fallback: -)
#> • Properties: importance and oob_error
#> • Other settings: use_weights = 'error'

# Define a Task
task = tsk("mtcars")

# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> 
#> Call:
#>  RRF(formula = task$formula(), data = task$data()) 
#>                Type of random forest: regression
#>                      Number of trees: 500
#> No. of variables tried at each split: 3
#> 
#>           Mean of squared residuals: 8.925245
#>                     % Var explained: 72.56
print(learner$importance())
#>        wt      disp       cyl        hp      drat      gear      carb        vs 
#> 159.05660 127.07380  89.39679  85.56903  70.75681  25.08973  20.99987  15.78369 
#>      qsec        am 
#>  15.00979  10.70654 

# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> regr.mse 
#>  5.65315