Bayesian regularization for feed-forward neural networks. Calls brnn::brnn() from package brnn.
Meta Information
Task type: “regr”
Predict Types: “response”
Feature Types: “integer”, “numeric”
Required Packages: mlr3, mlr3extralearners, brnn
Parameters
Id | Type | Default | Levels | Range
---|------|---------|--------|------
change | numeric | 0.001 | | \((-\infty, \infty)\)
cores | integer | 1 | | \([1, \infty)\)
epochs | integer | 1000 | | \([1, \infty)\)
min_grad | numeric | 1e-10 | | \((-\infty, \infty)\)
Monte_Carlo | logical | FALSE | TRUE, FALSE | -
mu | numeric | 0.005 | | \((-\infty, \infty)\)
mu_dec | numeric | 0.1 | | \((-\infty, \infty)\)
mu_inc | numeric | 10 | | \((-\infty, \infty)\)
mu_max | numeric | 1e+10 | | \((-\infty, \infty)\)
neurons | integer | 2 | | \([1, \infty)\)
normalize | logical | TRUE | TRUE, FALSE | -
samples | integer | 40 | | \([1, \infty)\)
tol | numeric | 1e-06 | | \((-\infty, \infty)\)
verbose | logical | FALSE | TRUE, FALSE | -
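Hyperparameters from the table above can be set at construction or changed afterwards via the learner's parameter set. The values below are illustrative, not tuned recommendations:

```r
library(mlr3)
library(mlr3extralearners)

# Construct the learner with non-default hyperparameters
learner = lrn("regr.brnn", neurons = 3, epochs = 500, verbose = FALSE)

# Parameters can also be updated after construction
learner$param_set$set_values(normalize = TRUE)

# Inspect the currently set values
learner$param_set$values
```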
See also
as.data.table(mlr_learners)
for a table of available Learners in the running session (depending on the loaded packages).
Chapter in the mlr3book: https://mlr3book.mlr-org.com/basics.html#learners
mlr3learners for a selection of recommended learners.
mlr3cluster for unsupervised clustering learners.
mlr3pipelines to combine learners with pre- and postprocessing steps.
mlr3tuning for tuning of hyperparameters, mlr3tuningspaces for established default tuning spaces.
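As a minimal sketch of how mlr3tuning could be combined with this learner, the snippet below grid-searches over the number of neurons (the search range is an assumption chosen for illustration):

```r
library(mlr3)
library(mlr3extralearners)
library(mlr3tuning)

# Mark the number of hidden neurons as tunable over 1..4
learner = lrn("regr.brnn", neurons = to_tune(1, 4))

# Grid search with 3-fold cross-validation, scored by MSE
instance = tune(
  tuner = tnr("grid_search"),
  task = tsk("mtcars"),
  learner = learner,
  resampling = rsmp("cv", folds = 3),
  measures = msr("regr.mse")
)

# Best configuration found
instance$result
```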
Super classes
mlr3::Learner
-> mlr3::LearnerRegr
-> LearnerRegrBrnn
Methods
Inherited methods
mlr3::Learner$base_learner()
mlr3::Learner$configure()
mlr3::Learner$encapsulate()
mlr3::Learner$format()
mlr3::Learner$help()
mlr3::Learner$predict()
mlr3::Learner$predict_newdata()
mlr3::Learner$print()
mlr3::Learner$reset()
mlr3::Learner$selected_features()
mlr3::Learner$train()
mlr3::LearnerRegr$predict_newdata_fast()
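Among the inherited methods, `$predict_newdata()` is useful for scoring raw data that is not wrapped in a Task. A short sketch (the rows used here are arbitrary):

```r
library(mlr3)
library(mlr3extralearners)

task = tsk("mtcars")
learner = lrn("regr.brnn")
learner$train(task)

# Predict on a plain data.frame without creating a new Task
newdata = task$data(rows = 1:3)
learner$predict_newdata(newdata)
```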
Examples
# Define the Learner
learner = lrn("regr.brnn")
print(learner)
#>
#> ── <LearnerRegrBrnn> (regr.brnn): Bayesian regularization for feed-forward neura
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3, mlr3extralearners, and brnn
#> • Predict Types: [response]
#> • Feature Types: integer and numeric
#> • Encapsulation: none (fallback: -)
#> • Properties:
#> • Other settings: use_weights = 'error'
# Define a Task
task = tsk("mtcars")
# Create train and test set
ids = partition(task)
# Train the learner on the training ids
learner$train(task, row_ids = ids$train)
#> Number of parameters (weights and biases) to estimate: 24
#> Nguyen-Widrow method
#> Scaling factor= 0.7234904
#> gamma= 7.5563 alpha= 3.0945 beta= 10.7761
print(learner$model)
#> A Bayesian regularized neural network
#> 10 - 2 - 1 with 24 weights, biases and connection strengths
#> Inputs and output were normalized
#> Training finished because Changes in F= beta*SCE + alpha*Ew in last 3 iterations less than 0.001
# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)
# Score the predictions
predictions$score()
#> regr.mse
#> 8.801579