Classification random forest learner. Calls h2o::h2o.randomForest() from package h2o.

H2O Connection

If no running H2O connection is found, the learner will automatically start a local H2O server on 127.0.0.1 via h2o::h2o.init(). If you want to connect to a remote H2O cluster, call h2o::h2o.init() with the appropriate arguments before training or predicting.
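For example, to work against a remote cluster, initialize the connection yourself before using the learner. A minimal sketch — the IP and port below are placeholders, not a real cluster:

```r
library(h2o)

# Hypothetical remote cluster address; replace with your own.
# Once this connection exists, the learner reuses it instead of
# starting a local server.
h2o.init(ip = "10.0.0.5", port = 54321)
```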

Dictionary

This Learner can be instantiated via lrn():

lrn("classif.h2o.randomForest")

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob”

  • Feature Types: “integer”, “numeric”, “factor”

  • Required Packages: mlr3, mlr3extralearners, h2o
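
Since the learner supports both predict types, probability predictions can be requested at construction. A brief sketch using the `lrn()` sugar function:

```r
library(mlr3)
library(mlr3extralearners)

# Request class probabilities instead of hard labels
learner = lrn("classif.h2o.randomForest", predict_type = "prob")
```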

Parameters

| Id | Type | Default | Levels | Range |
|----|------|---------|--------|-------|
| auc_type | character | AUTO | AUTO, NONE, MACRO_OVR, WEIGHTED_OVR, MACRO_OVO, WEIGHTED_OVO | - |
| balance_classes | logical | FALSE | TRUE, FALSE | - |
| binomial_double_trees | logical | FALSE | TRUE, FALSE | - |
| build_tree_one_node | logical | FALSE | TRUE, FALSE | - |
| categorical_encoding | character | AUTO | AUTO, Enum, OneHotInternal, OneHotExplicit, Binary, Eigen, LabelEncoder, SortByResponse, EnumLimited | - |
| check_constant_response | logical | TRUE | TRUE, FALSE | - |
| checkpoint | untyped | NULL | - | - |
| class_sampling_factors | untyped | NULL | - | - |
| col_sample_rate_change_per_level | numeric | 1 | - | \([0, 2]\) |
| col_sample_rate_per_tree | numeric | 1 | - | \([0, 1]\) |
| export_checkpoints_dir | untyped | NULL | - | - |
| gainslift_bins | integer | -1 | - | \([-1, \infty)\) |
| histogram_type | character | AUTO | AUTO, UniformAdaptive, Random, QuantilesGlobal, RoundRobin, UniformRobust | - |
| ignore_const_cols | logical | TRUE | TRUE, FALSE | - |
| max_after_balance_size | numeric | 5 | - | \([0, \infty)\) |
| max_depth | integer | 20 | - | \([0, \infty)\) |
| max_runtime_secs | numeric | 0 | - | \([0, \infty)\) |
| min_rows | numeric | 1 | - | \([1, \infty)\) |
| min_split_improvement | numeric | 1e-05 | - | \([0, \infty)\) |
| mtries | integer | -1 | - | \([1, \infty)\) |
| nbins | integer | 20 | - | \([1, \infty)\) |
| nbins_cats | integer | 1024 | - | \([1, \infty)\) |
| nbins_top_level | integer | 1024 | - | \([1, \infty)\) |
| ntrees | integer | 50 | - | \([1, \infty)\) |
| sample_rate | numeric | 0.632 | - | \([0, 1]\) |
| sample_rate_per_class | untyped | NULL | - | - |
| score_each_iteration | logical | FALSE | TRUE, FALSE | - |
| score_tree_interval | integer | 0 | - | \([0, \infty)\) |
| seed | integer | -1 | - | \((-\infty, \infty)\) |
| stopping_metric | character | AUTO | AUTO, logloss, AUC, AUCPR, lift_top_group, misclassification, mean_per_class_error | - |
| stopping_rounds | integer | 0 | - | \([0, \infty)\) |
| stopping_tolerance | numeric | 0.001 | - | \([0, \infty)\) |
| verbose | logical | FALSE | TRUE, FALSE | - |
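
Hyperparameters from the table above can be set at construction or changed later through the learner's parameter set. A brief sketch:

```r
library(mlr3)
library(mlr3extralearners)

# Set hyperparameters at construction ...
learner = lrn("classif.h2o.randomForest", ntrees = 100, max_depth = 15)

# ... or update them afterwards via the parameter set
learner$param_set$values$sample_rate = 0.8
```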

References

Fryda T, LeDell E, Gill N, Aiello S, Fu A, Candel A, Click C, Kraljevic T, Nykodym T, Aboyoun P, Kurka M, Malohlava M, Poirier S, Wong W (2025). h2o: R Interface for the 'H2O' Scalable Machine Learning Platform. R package version 3.46.0.9, https://github.com/h2oai/h2o-3.

Author

awinterstetter

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifH2ORandomForest

Methods

Method new()

Creates a new instance of this R6 class.

Usage

LearnerClassifH2ORandomForest$new()


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifH2ORandomForest$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.
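
For instance, a deep clone yields an independent copy whose parameter values can be modified without affecting the original (a minimal sketch):

```r
library(mlr3)
library(mlr3extralearners)

learner = lrn("classif.h2o.randomForest")

# Deep clone: independent copy, including the parameter set
learner2 = learner$clone(deep = TRUE)
learner2$param_set$values$ntrees = 200  # leaves `learner` untouched
```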

Examples

# Define the Learner
learner = lrn("classif.h2o.randomForest")
print(learner)
#> 
#> ── <LearnerClassifH2ORandomForest> (classif.h2o.randomForest): H2O Random Forest
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3, mlr3extralearners, and h2o
#> • Predict Types: [response] and prob
#> • Feature Types: integer, numeric, and factor
#> • Encapsulation: none (fallback: -)
#> • Properties: missings, multiclass, twoclass, and weights
#> • Other settings: use_weights = 'use'

# Define a Task
task = tsk("sonar")

# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> Model Details:
#> ==============
#> 
#> H2OBinomialModel: drf
#> Model ID:  DRF_model_R_1774260318250_58 
#> Model Summary: 
#>   number_of_trees number_of_internal_trees model_size_in_bytes min_depth
#> 1              50                       50               12879         5
#>   max_depth mean_depth min_leaves max_leaves mean_leaves
#> 1        10    7.04000          9         20    15.86000
#> 
#> 
#> H2OBinomialMetrics: drf
#> ** Reported on training data. **
#> ** Metrics reported on Out-Of-Bag training samples **
#> 
#> MSE:  0.121149
#> RMSE:  0.3480646
#> LogLoss:  0.3912794
#> Mean Per-Class Error:  0.1314628
#> AUC:  0.9257041
#> AUCPR:  0.9064304
#> Gini:  0.8514082
#> R^2:  0.5080455
#> 
#> Confusion Matrix (vertical: actual; across: predicted) for F1-optimal threshold:
#>         M  R    Error     Rate
#> M      69  9 0.115385    =9/78
#> R       9 52 0.147541    =9/61
#> Totals 78 61 0.129496  =18/139
#> 
#> Maximum Metrics: Maximum metrics at their respective thresholds
#>                         metric threshold     value idx
#> 1                       max f1  0.476190  0.852459  44
#> 2                       max f2  0.277778  0.885886  68
#> 3                 max f0point5  0.538462  0.867925  39
#> 4                 max accuracy  0.476190  0.870504  44
#> 5                max precision  0.944444  1.000000   0
#> 6                   max recall  0.117647  1.000000  86
#> 7              max specificity  0.944444  1.000000   0
#> 8             max absolute_mcc  0.476190  0.737074  44
#> 9   max min_per_class_accuracy  0.458333  0.858974  47
#> 10 max mean_per_class_accuracy  0.476190  0.868537  44
#> 11                     max tns  0.944444 78.000000   0
#> 12                     max fns  0.944444 60.000000   0
#> 13                     max fps  0.000000 78.000000 100
#> 14                     max tps  0.117647 61.000000  86
#> 15                     max tnr  0.944444  1.000000   0
#> 16                     max fnr  0.944444  0.983607   0
#> 17                     max fpr  0.000000  1.000000 100
#> 18                     max tpr  0.117647  1.000000  86
#> 
#> Gains/Lift Table: Extract with `h2o.gainsLift(<model>, <data>)` or `h2o.gainsLift(<model>, valid=<T/F>, xval=<T/F>)`
#> 
#> 


# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> classif.ce 
#>   0.173913
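
score() defaults to classification error (classif.ce); other measures can be passed explicitly via msr(). A short sketch continuing the example above:

```r
# Evaluate the same predictions with classification accuracy instead
predictions$score(msr("classif.acc"))
```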