Random forests for blocks of clinical and omics covariate data. Calls blockForest::blockfor() from package blockForest.

In this learner, only the trained forest object ($forest) is retained. The optimized block-specific tuning parameters (paramvalues) and the biased OOB error estimate (biased_oob_error_donotuse) are discarded, as they are either not needed for downstream use or not reliable for performance estimation.
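
In practice, the fitted model stored in the learner is therefore the forest itself. A minimal sketch, assuming a task and a blocks list have already been defined and that the stored model is the $forest component returned by blockForest::blockfor():

learner = lrn("classif.blockforest", blocks = blocks, importance = "permutation")
learner$train(task)
# only the forest is kept; paramvalues and biased_oob_error_donotuse are dropped
class(learner$model)
# slot read by $importance() (available because an importance mode was requested)
learner$model$variable.importance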

Initial parameter values

  • num.threads is initialized to 1 to avoid conflicts with parallelization via future.
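
If more threads are wanted, the value can be overridden; a minimal sketch (the count of 4 is only illustrative):

# at construction time
lrn("classif.blockforest", num.threads = 4)
# or on an already constructed learner
learner$param_set$values$num.threads = 4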

Dictionary

This Learner can be instantiated via lrn():

lrn("classif.blockforest")

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob”

  • Feature Types: “logical”, “integer”, “numeric”, “factor”, “ordered”

  • Required Packages: mlr3, mlr3extralearners, blockForest

Parameters

Id                    Type       Default      Levels                                                         Range
blocks                untyped    -            -                                                              -
block.method          character  BlockForest  BlockForest, RandomBlock, BlockVarSel, VarProb, SplitWeights   -
num.trees             integer    2000         -                                                              \([1, \infty)\)
mtry                  untyped    NULL         -                                                              -
nsets                 integer    300          -                                                              \([1, \infty)\)
num.trees.pre         integer    1500         -                                                              \([1, \infty)\)
splitrule             character  extratrees   extratrees, gini                                               -
always.select.block   integer    0            -                                                              \([0, 1]\)
importance            character  -            none, impurity, impurity_corrected, permutation                -
num.threads           integer    -            -                                                              \([1, \infty)\)
seed                  integer    NULL         -                                                              \((-\infty, \infty)\)
verbose               logical    TRUE         TRUE, FALSE                                                    -

References

Hornung, R., Wright, M. N. (2019). “Block Forests: Random forests for blocks of clinical and omics covariate data.” BMC Bioinformatics, 20(1), 1–17. doi:10.1186/s12859-019-2942-y.

Author

bblodfon

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifBlockForest

Methods

Method new()

Creates a new instance of this R6 class.
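
Usage

LearnerClassifBlockForest$new()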


Method importance()

The importance scores are extracted from the model slot variable.importance.

Usage

LearnerClassifBlockForest$importance()

Returns

Named numeric().


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifBlockForest$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

# Define a Task
task = tsk("sonar")

# Create train and test set
ids = partition(task)

# check task's features
task$feature_names
#>  [1] "V1"  "V10" "V11" "V12" "V13" "V14" "V15" "V16" "V17" "V18" "V19" "V2" 
#> [13] "V20" "V21" "V22" "V23" "V24" "V25" "V26" "V27" "V28" "V29" "V3"  "V30"
#> [25] "V31" "V32" "V33" "V34" "V35" "V36" "V37" "V38" "V39" "V4"  "V40" "V41"
#> [37] "V42" "V43" "V44" "V45" "V46" "V47" "V48" "V49" "V5"  "V50" "V51" "V52"
#> [49] "V53" "V54" "V55" "V56" "V57" "V58" "V59" "V6"  "V60" "V7"  "V8"  "V9" 

# partition the features into 2 blocks
blocks = list(bl1 = 1:42, bl2 = 43:60)

# define learner
learner = lrn("classif.blockforest", blocks = blocks,
              importance = "permutation", nsets = 10, predict_type = "prob",
              num.trees = 50, num.trees.pre = 10, splitrule = "gini")

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

# feature importance
learner$importance()
#>           V10           V11           V48           V37            V9 
#>  0.0201235233  0.0171443524  0.0165728282  0.0150995356  0.0132233459 
#>           V49           V13           V21           V12           V36 
#>  0.0128807004  0.0122192337  0.0116425814  0.0114140101  0.0109169360 
#>           V16           V50           V45            V5           V15 
#>  0.0061212834  0.0056765600  0.0054896632  0.0048872481  0.0044994226 
#>           V43           V20           V26           V46           V17 
#>  0.0043137905  0.0040520271  0.0037389428  0.0033279207  0.0029979357 
#>           V28           V33           V34           V22           V44 
#>  0.0029698122  0.0028423699  0.0028395931  0.0027935761  0.0027252553 
#>           V41            V6           V59           V14           V47 
#>  0.0024762725  0.0023662723  0.0023449497  0.0021101190  0.0020371452 
#>           V18           V27           V58           V52           V54 
#>  0.0016199850  0.0014981942  0.0014700937  0.0014539933  0.0014014885 
#>           V40           V38           V35           V39           V57 
#>  0.0011754877  0.0011649440  0.0009910916  0.0009705966  0.0006698479 
#>           V42           V25            V4           V56            V7 
#>  0.0004982824  0.0002279726  0.0001986704  0.0001968406  0.0001354238 
#>           V30            V1           V24           V31           V32 
#>  0.0000000000 -0.0001890310 -0.0003147160 -0.0003835610 -0.0005463704 
#>           V51           V19            V2           V53           V29 
#> -0.0005650207 -0.0006497036 -0.0007695154 -0.0007815835 -0.0008728799 
#>            V3           V23            V8           V60           V55 
#> -0.0010091146 -0.0012800000 -0.0021433169 -0.0022658868 -0.0023194802 

# Make predictions for the test observations
pred = learner$predict(task, row_ids = ids$test)
pred
#> 
#> ── <PredictionClassif> for 69 observations: ────────────────────────────────────
#>  row_ids truth response    prob.M    prob.R
#>        1     R        M 0.5554286 0.4445714
#>        2     R        M 0.6223810 0.3776190
#>        4     R        R 0.4702143 0.5297857
#>      ---   ---      ---       ---       ---
#>      203     M        M 0.8205159 0.1794841
#>      206     M        M 0.7052222 0.2947778
#>      208     M        M 0.5022778 0.4977222

# Score the predictions
pred$score()
#> classif.ce 
#>  0.2028986
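
# Because the learner was constructed with predict_type = "prob", probability-based
# measures can also be scored; a sketch (output not shown)
pred$score(msr("classif.auc"))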