Classification H2O Deep Learning Learner
Source: R/learner_h2o_classif_deeplearning.R
mlr_learners_classif.h2o.deeplearning.Rd
Feed-forward multilayer artificial neural network learner for classification.
Calls h2o::h2o.deeplearning() from package h2o.
Parameters
| Id | Type | Default | Levels | Range |
| --- | --- | --- | --- | --- |
| ignore_const_cols | logical | TRUE | TRUE, FALSE | - |
| score_each_iteration | logical | FALSE | TRUE, FALSE | - |
| balance_classes | logical | FALSE | TRUE, FALSE | - |
| class_sampling_factors | untyped | NULL | - | - |
| max_after_balance_size | numeric | 5 | - | \((-\infty, \infty)\) |
| checkpoint | untyped | NULL | - | - |
| pretrained_autoencoder | untyped | NULL | - | - |
| overwrite_with_best_model | logical | TRUE | TRUE, FALSE | - |
| standardize | logical | TRUE | TRUE, FALSE | - |
| activation | character | Rectifier | Rectifier, Tanh, TanhWithDropout, RectifierWithDropout, Maxout, MaxoutWithDropout | - |
| hidden | untyped | c(200L, 200L) | - | - |
| epochs | numeric | 10 | - | \([1, \infty)\) |
| train_samples_per_iteration | integer | -2 | - | \([-2, \infty)\) |
| target_ratio_comm_to_comp | numeric | 0.05 | - | \((-\infty, \infty)\) |
| seed | integer | -1 | - | \((-\infty, \infty)\) |
| adaptive_rate | logical | TRUE | TRUE, FALSE | - |
| rho | numeric | 0.99 | - | \([0, \infty)\) |
| epsilon | numeric | 1e-08 | - | \([1e-10, 1e-04]\) |
| rate | numeric | 0.005 | - | \([0, 1]\) |
| rate_annealing | numeric | 1e-06 | - | \([0, \infty)\) |
| rate_decay | numeric | 1 | - | \([0, \infty)\) |
| momentum_start | numeric | 0 | - | \((-\infty, \infty)\) |
| momentum_ramp | numeric | 1e+06 | - | \((-\infty, \infty)\) |
| momentum_stable | numeric | 0 | - | \((-\infty, \infty)\) |
| nesterov_accelerated_gradient | logical | TRUE | TRUE, FALSE | - |
| input_dropout_ratio | numeric | 0 | - | \((-\infty, \infty)\) |
| hidden_dropout_ratios | numeric | 0.5 | - | \((-\infty, \infty)\) |
| l1 | numeric | 0 | - | \((-\infty, \infty)\) |
| l2 | numeric | 0 | - | \((-\infty, \infty)\) |
| max_w2 | numeric | 3.402823e+38 | - | \((-\infty, \infty)\) |
| initial_weight_distribution | character | UniformAdaptive | UniformAdaptive, Uniform, Normal | - |
| initial_weight_scale | numeric | 1 | - | \((-\infty, \infty)\) |
| loss | character | Automatic | Automatic, CrossEntropy, Quadratic | - |
| distribution | character | AUTO | AUTO, bernoulli, multinomial | - |
| score_interval | numeric | 5 | - | \((-\infty, \infty)\) |
| score_training_samples | integer | 10000 | - | \((-\infty, \infty)\) |
| score_validation_samples | integer | 0 | - | \((-\infty, \infty)\) |
| score_duty_cycle | numeric | 0.1 | - | \((-\infty, \infty)\) |
| classification_stop | numeric | 0 | - | \([-1, \infty)\) |
| stopping_rounds | integer | 5 | - | \([0, \infty)\) |
| stopping_metric | character | AUTO | AUTO, logloss, AUC, lift_top_group, misclassification, AUCPR, mean_per_class_error | - |
| stopping_tolerance | numeric | 0 | - | \([0, \infty)\) |
| max_runtime_secs | numeric | 0 | - | \([0, \infty)\) |
| score_validation_sampling | character | Uniform | Uniform, Stratified | - |
| diagnostics | logical | TRUE | TRUE, FALSE | - |
| fast_mode | logical | TRUE | TRUE, FALSE | - |
| force_load_balance | logical | TRUE | TRUE, FALSE | - |
| replicate_training_data | logical | TRUE | TRUE, FALSE | - |
| single_node_mode | logical | FALSE | TRUE, FALSE | - |
| shuffle_training_data | logical | FALSE | TRUE, FALSE | - |
| missing_values_handling | character | MeanImputation | MeanImputation, Skip | - |
| quiet_mode | logical | TRUE | TRUE, FALSE | - |
| autoencoder | logical | FALSE | TRUE, FALSE | - |
| sparse | logical | FALSE | TRUE, FALSE | - |
| average_activation | numeric | 0 | - | \((-\infty, \infty)\) |
| sparsity_beta | numeric | 0 | - | \((-\infty, \infty)\) |
| max_categorical_features | integer | NULL | - | \((-\infty, \infty)\) |
| reproducible | logical | FALSE | TRUE, FALSE | - |
| export_weights_and_biases | logical | FALSE | TRUE, FALSE | - |
| mini_batch_size | integer | 1 | - | \((-\infty, \infty)\) |
| categorical_encoding | character | AUTO | AUTO, Enum, OneHotInternal, OneHotExplicit, Binary, Eigen, LabelEncoder, SortByResponse, EnumLimited | - |
| elastic_averaging | logical | FALSE | TRUE, FALSE | - |
| elastic_averaging_moving_rate | numeric | 0.9 | - | \((-\infty, \infty)\) |
| elastic_averaging_regularization | numeric | 0.001 | - | \((-\infty, \infty)\) |
| export_checkpoints_dir | untyped | NULL | - | - |
| auc_type | character | AUTO | AUTO, NONE, MACRO_OVR, WEIGHTED_OVR, MACRO_OVO, WEIGHTED_OVO | - |
| gainslift_bins | integer | -1 | - | \([-1, \infty)\) |
| verbose | logical | FALSE | TRUE, FALSE | - |
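As a sketch of how hyperparameters from the table above are set in practice (the specific values chosen here are purely illustrative, and running the learner requires a working h2o installation, since `$train()` starts a local H2O cluster):

```r
library(mlr3)
library(mlr3extralearners)

# Configure a smaller network with dropout and early stopping;
# all parameter names come from the table above.
learner = lrn("classif.h2o.deeplearning",
  hidden = c(50L, 50L),               # two hidden layers of 50 units
  epochs = 20,
  activation = "RectifierWithDropout",
  hidden_dropout_ratios = c(0.2, 0.2), # one ratio per hidden layer
  stopping_metric = "logloss",
  stopping_rounds = 3,
  stopping_tolerance = 1e-3,
  seed = 42                            # exact reproducibility also needs reproducible = TRUE
)

# Inspect the configured values without training
learner$param_set$values$hidden
```

Note that `hidden_dropout_ratios` must have one entry per hidden layer, and is only used with the `*WithDropout` activations.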
References
Fryda T, LeDell E, Gill N, Aiello S, Fu A, Candel A, Click C, Kraljevic T, Nykodym T, Aboyoun P, Kurka M, Malohlava M, Poirier S, Wong W (2025). h2o: R Interface for the 'H2O' Scalable Machine Learning Platform. R package version 3.46.0.9, https://github.com/h2oai/h2o-3.
See also
as.data.table(mlr_learners) for a table of available Learners in the running session (depending on the loaded packages).
Chapter in the mlr3book: https://mlr3book.mlr-org.com/basics.html#learners
mlr3learners for a selection of recommended learners.
mlr3cluster for unsupervised clustering learners.
mlr3pipelines to combine learners with pre- and postprocessing steps.
mlr3tuning for tuning of hyperparameters, mlr3tuningspaces for established default tuning spaces.
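The hyperparameters above can be tuned directly with mlr3tuning; a minimal sketch (the search space and budget are illustrative only):

```r
library(mlr3)
library(mlr3tuning)
library(mlr3extralearners)

# Mark hyperparameters for tuning; ranges are chosen for illustration.
learner = lrn("classif.h2o.deeplearning",
  epochs = to_tune(5, 50),
  rate   = to_tune(1e-4, 1e-1, logscale = TRUE),
  adaptive_rate = FALSE  # 'rate' is only used when adaptive_rate is disabled
)

instance = tune(
  tuner = tnr("random_search"),
  task = tsk("sonar"),
  learner = learner,
  resampling = rsmp("cv", folds = 3),
  measures = msr("classif.ce"),
  term_evals = 10
)
instance$result
```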
Super classes
mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifH2ODeeplearning
Methods
Inherited methods
mlr3::Learner$base_learner(), mlr3::Learner$configure(), mlr3::Learner$encapsulate(), mlr3::Learner$format(), mlr3::Learner$help(), mlr3::Learner$predict(), mlr3::Learner$predict_newdata(), mlr3::Learner$print(), mlr3::Learner$reset(), mlr3::Learner$selected_features(), mlr3::Learner$train(), mlr3::LearnerClassif$predict_newdata_fast()
Examples
# Define the Learner
learner = lrn("classif.h2o.deeplearning")
print(learner)
#>
#> ── <LearnerClassifH2ODeeplearning> (classif.h2o.deeplearning): H2O Deep Learning
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3, mlr3extralearners, and h2o
#> • Predict Types: [response] and prob
#> • Feature Types: integer, numeric, and factor
#> • Encapsulation: none (fallback: -)
#> • Properties: missings, multiclass, twoclass, and weights
#> • Other settings: use_weights = 'use'
# Define a Task
task = tsk("sonar")
# Create train and test set
ids = partition(task)
# Train the learner on the training ids
learner$train(task, row_ids = ids$train)
#> Warning:
#> Your H2O cluster version is (2 years, 2 months and 9 days) old. There may be a newer version available.
#> Please download and install the latest version from: https://h2o-release.s3.amazonaws.com/h2o/latest_stable.html
print(learner$model)
#> Model Details:
#> ==============
#>
#> H2OBinomialModel: deeplearning
#> Model ID: DeepLearning_model_R_1772455678928_1
#> Status of Neuron Layers: predicting Class, 2-class classification, bernoulli distribution, CrossEntropy loss, 52,802 weights/biases, 632.9 KB, 1,390 training samples, mini-batch size 1
#> layer units type dropout l1 l2 mean_rate rate_rms momentum
#> 1 1 60 Input 0.00 % NA NA NA NA NA
#> 2 2 200 Rectifier 0.00 % 0.000000 0.000000 0.006835 0.004646 0.000000
#> 3 3 200 Rectifier 0.00 % 0.000000 0.000000 0.029023 0.111292 0.000000
#> 4 4 2 Softmax NA 0.000000 0.000000 0.002185 0.000890 0.000000
#> mean_weight weight_rms mean_bias bias_rms
#> 1 NA NA NA NA
#> 2 0.000109 0.091923 0.493855 0.009165
#> 3 -0.000741 0.069720 0.997626 0.005519
#> 4 0.001226 0.412305 0.000002 0.001697
#>
#>
#> H2OBinomialMetrics: deeplearning
#> ** Reported on training data. **
#> ** Metrics reported on full training frame **
#>
#> MSE: 0.00134771
#> RMSE: 0.03671116
#> LogLoss: 0.008448052
#> Mean Per-Class Error: 0
#> AUC: 1
#> AUCPR: 1
#> Gini: 1
#>
#> Confusion Matrix (vertical: actual; across: predicted) for F1-optimal threshold:
#> M R Error Rate
#> M 69 0 0.000000 =0/69
#> R 0 70 0.000000 =0/70
#> Totals 69 70 0.000000 =0/139
#>
#> Maximum Metrics: Maximum metrics at their respective thresholds
#> metric threshold value idx
#> 1 max f1 0.930033 1.000000 54
#> 2 max f2 0.930033 1.000000 54
#> 3 max f0point5 0.930033 1.000000 54
#> 4 max accuracy 0.930033 1.000000 54
#> 5 max precision 1.000000 1.000000 0
#> 6 max recall 0.930033 1.000000 54
#> 7 max specificity 1.000000 1.000000 0
#> 8 max absolute_mcc 0.930033 1.000000 54
#> 9 max min_per_class_accuracy 0.930033 1.000000 54
#> 10 max mean_per_class_accuracy 0.930033 1.000000 54
#> 11 max tns 1.000000 69.000000 0
#> 12 max fns 1.000000 56.000000 0
#> 13 max fps 0.000000 69.000000 123
#> 14 max tps 0.930033 70.000000 54
#> 15 max tnr 1.000000 1.000000 0
#> 16 max fnr 1.000000 0.800000 0
#> 17 max fpr 0.000000 1.000000 123
#> 18 max tpr 0.930033 1.000000 54
#>
#> Gains/Lift Table: Extract with `h2o.gainsLift(<model>, <data>)` or `h2o.gainsLift(<model>, valid=<T/F>, xval=<T/F>)`
#>
#>
# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)
# Score the predictions
predictions$score()
#> classif.ce
#> 0.2463768
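The learner also supports predict_type = "prob" (see Predict Types above). A short sketch continuing the example, assuming the task, learner, and ids objects from above are still in scope:

```r
# Switch to posterior probabilities instead of hard labels,
# then retrain and predict on the held-out rows.
learner$predict_type = "prob"
learner$train(task, row_ids = ids$train)
predictions = learner$predict(task, row_ids = ids$test)

head(predictions$prob)               # per-class probabilities for test rows
predictions$score(msr("classif.auc")) # threshold-free evaluation
```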