Regression Gaussian Process Learner (GPfit)
Source: R/learner_GPfit_regr_gpfit.R
Gaussian process regression via GPfit::GP_fit() from package GPfit.
Note
As the optimization routine assumes that the inputs are scaled to the unit hypercube,
each input variable is scaled by default.
If this is not wanted, set `scale = FALSE`. We replace the GPfit parameter `corr = list(type = "exponential", power = 1.95)` with the separate parameters `type` and `power`. In the case of `corr = list(type = "matern", nu = 0.5)`, the separate parameters are `type` and `matern_nu_k = 0`, and `nu` is computed as `nu = (2 * matern_nu_k + 1) / 2 = 0.5`.
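As a brief sketch of the parameter mapping described above (assuming mlr3 and mlr3extralearners are installed and loaded), the Matérn correlation with nu = 0.5 is selected via `type` and `matern_nu_k`:

```r
library(mlr3)
library(mlr3extralearners)

# Matern correlation: matern_nu_k = 0 corresponds to GPfit's
# corr = list(type = "matern", nu = 0.5),
# since nu = (2 * matern_nu_k + 1) / 2 = 0.5
learner = lrn("regr.gpfit", type = "matern", matern_nu_k = 0)

# If the features are already scaled to the unit hypercube,
# the internal scaling can be disabled
learner$param_set$set_values(scale = FALSE)
```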
Meta Information
Task type: “regr”
Predict Types: “response”, “se”
Feature Types: “integer”, “numeric”
Required Packages: mlr3, mlr3extralearners, GPfit
Parameters
| Id          | Type      | Default     | Levels              | Range             |
| ----------- | --------- | ----------- | ------------------- | ----------------- |
| control     | untyped   | NULL        | -                   | -                 |
| nug_thres   | numeric   | -           | -                   | \([0, \infty)\)   |
| trace       | logical   | -           | TRUE, FALSE         | -                 |
| maxit       | integer   | -           | -                   | \([1, \infty)\)   |
| optim_start | untyped   | -           | -                   | -                 |
| scale       | logical   | -           | TRUE, FALSE         | -                 |
| type        | character | exponential | exponential, matern | -                 |
| matern_nu_k | integer   | 0           | -                   | \([0, \infty)\)   |
| power       | numeric   | 1.95        | -                   | \([1, 2]\)        |
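As an illustrative sketch (parameter names as listed in the table above), hyperparameters can be set at construction or afterwards through the learner's param_set:

```r
library(mlr3)
library(mlr3extralearners)

# Exponential correlation (the default type) with power in [1, 2]
learner = lrn("regr.gpfit", type = "exponential", power = 1.95)

# Parameters can also be changed after construction
learner$param_set$set_values(trace = FALSE, maxit = 100)
```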
See also
as.data.table(mlr_learners) for a table of available Learners in the running session (depending on the loaded packages).
Chapter in the mlr3book: https://mlr3book.mlr-org.com/basics.html#learners
mlr3learners for a selection of recommended learners.
mlr3cluster for unsupervised clustering learners.
mlr3pipelines to combine learners with pre- and postprocessing steps.
mlr3tuning for tuning of hyperparameters, mlr3tuningspaces for established default tuning spaces.
Super classes
mlr3::Learner -> mlr3::LearnerRegr -> LearnerRegrGPfit
Methods
Inherited methods
mlr3::Learner$base_learner(), mlr3::Learner$configure(), mlr3::Learner$encapsulate(), mlr3::Learner$format(), mlr3::Learner$help(), mlr3::Learner$predict(), mlr3::Learner$predict_newdata(), mlr3::Learner$print(), mlr3::Learner$reset(), mlr3::Learner$selected_features(), mlr3::Learner$train(), mlr3::LearnerRegr$predict_newdata_fast()
Examples
# Define the Learner
learner = lrn("regr.gpfit")
print(learner)
#>
#> ── <LearnerRegrGPfit> (regr.gpfit): Gaussian Process (GPfit) ───────────────────
#> • Model: -
#> • Parameters: nug_thres=20, trace=FALSE, maxit=100, optim_start=<NULL>,
#> scale=TRUE
#> • Packages: mlr3, mlr3extralearners, and GPfit
#> • Predict Types: [response] and se
#> • Feature Types: integer and numeric
#> • Encapsulation: none (fallback: -)
#> • Properties:
#> • Other settings: use_weights = 'error'
# Define a Task
task = tsk("mtcars")
# Create train and test set
ids = partition(task)
# Train the learner on the training ids
learner$train(task, row_ids = ids$train)
print(learner$model)
#> $model
#>
#> Number Of Observations: n = 21
#> Input Dimensions: d = 10
#>
#> Correlation: Exponential (power = 1.95)
#> Correlation Parameters:
#> beta_hat.1 beta_hat.2 beta_hat.3 beta_hat.4 beta_hat.5 beta_hat.6
#> [1] -0.1595304 -8.785053 -10 -8.060125 -7.397818 0.362155
#> beta_hat.7 beta_hat.8 beta_hat.9 beta_hat.10
#> [1] 1.366509 0.5593852 -8.378369 0.2543255
#>
#> sigma^2_hat: [1] 38.45304
#>
#> delta_lb(beta_hat): [1] 0
#>
#> nugget threshold parameter: 20
#>
#>
#> $mlist
#> $mlist$scaled
#> [1] TRUE
#>
#> $mlist$not_const
#> [1] "am" "carb" "cyl" "disp" "drat" "gear" "hp" "qsec" "vs" "wt"
#>
#> $mlist$high
#> am carb cyl disp drat gear hp qsec vs wt
#> 1.000 4.000 8.000 460.000 4.430 5.000 245.000 22.900 1.000 5.424
#>
#> $mlist$low
#> am carb cyl disp drat gear hp qsec vs wt
#> 0.000 1.000 4.000 78.700 2.760 3.000 62.000 15.410 0.000 1.513
#>
#>
# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)
# Score the predictions
predictions$score()
#> regr.mse
#> 9.88129