Bayesian regularization for feed-forward neural networks. Calls brnn::brnn() from package brnn.
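
Internally, training amounts to a call to brnn::brnn() on the task's feature matrix and target vector. A minimal direct-call sketch with illustrative toy data (not taken from this page; argument names correspond to the parameter table below):

library(brnn)

set.seed(1)
x = matrix(rnorm(200), ncol = 2)      # illustrative feature matrix (100 x 2)
y = x[, 1]^2 + rnorm(100, sd = 0.1)   # illustrative target
fit = brnn(x, y, neurons = 2)         # two hidden neurons, defaults otherwise
head(predict(fit, x))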

Dictionary

This Learner can be instantiated via lrn():

lrn("regr.brnn")

Meta Information

  • Task type: “regr”

  • Predict Types: “response”

  • Feature Types: “integer”, “numeric”

  • Required Packages: mlr3, mlr3extralearners, brnn

Parameters

Id           Type     Default  Levels       Range
change       numeric  0.001    -            (-∞, ∞)
cores        integer  1        -            [1, ∞)
epochs       integer  1000     -            [1, ∞)
min_grad     numeric  1e-10    -            (-∞, ∞)
Monte_Carlo  logical  FALSE    TRUE, FALSE  -
mu           numeric  0.005    -            (-∞, ∞)
mu_dec       numeric  0.1      -            (-∞, ∞)
mu_inc       numeric  10       -            (-∞, ∞)
mu_max       numeric  1e+10    -            (-∞, ∞)
neurons      integer  2        -            [1, ∞)
normalize    logical  TRUE     TRUE, FALSE  -
samples      integer  40       -            [1, ∞)
tol          numeric  1e-06    -            (-∞, ∞)
verbose      logical  FALSE    TRUE, FALSE  -
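
Parameters can also be inspected and updated on an existing learner via its param_set. A short sketch (the values are illustrative):

library(mlr3)
library(mlr3extralearners)

learner = lrn("regr.brnn")
# Update hyperparameters after construction and inspect the result
learner$param_set$set_values(neurons = 3, tol = 1e-08)
learner$param_set$values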

See also

Author

annanzrv

Super classes

mlr3::Learner -> mlr3::LearnerRegr -> LearnerRegrBrnn

Methods

Inherited methods


Method new()

Creates a new instance of this R6 class.

Usage

LearnerRegrBrnn$new()


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerRegrBrnn$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.
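
For example, a deep clone yields an independent copy that can be modified without affecting the original (a brief sketch; assumes mlr3 and mlr3extralearners are attached):

learner = lrn("regr.brnn")
learner2 = learner$clone(deep = TRUE)

# Changing the copy leaves the original learner untouched
learner2$param_set$set_values(neurons = 5)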

Examples

# Load the required packages
library(mlr3)
library(mlr3extralearners)

# Define the Learner
learner = lrn("regr.brnn")
print(learner)
#> 
#> ── <LearnerRegrBrnn> (regr.brnn): Bayesian regularization for feed-forward neura
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3, mlr3extralearners, and brnn
#> • Predict Types: [response]
#> • Feature Types: integer and numeric
#> • Encapsulation: none (fallback: -)
#> • Properties:
#> • Other settings: use_weights = 'error'

# Define a Task
task = tsk("mtcars")

# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)
#> Number of parameters (weights and biases) to estimate: 24 
#> Nguyen-Widrow method
#> Scaling factor= 0.7234904 
#> gamma= 7.5563 	 alpha= 3.0945 	 beta= 10.7761 

print(learner$model)
#> A Bayesian regularized neural network 
#> 10 - 2 - 1 with 24 weights, biases and connection strengths
#> Inputs and output were  normalized
#> Training finished because  Changes in F= beta*SCE + alpha*Ew in last 3 iterations less than 0.001 


# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> regr.mse 
#> 8.801579
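
The default measure for regression predictions is the mean squared error ("regr.mse"). Other measures can be passed to score() explicitly; for example (output omitted, as it depends on the random partition):

predictions$score(msr("regr.rmse"))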