Gaussian process regression via GPfit::GP_fit() from GPfit.

Note

As the optimization routine assumes that the inputs are scaled to the unit hypercube, each input variable is scaled by default. If this is not desired, set `scale = FALSE`. The GPfit parameter `corr = list(type = "exponential", power = 1.95)` is replaced by the separate parameters `type` and `power`. In the case of `corr = list(type = "matern", nu = 0.5)`, the separate parameters are `type` and `matern_nu_k = 0`, and nu is computed as `nu = (2 * matern_nu_k + 1) / 2 = 0.5`.
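The parameter mapping described above can be sketched as follows (a minimal sketch; the specific values are chosen for illustration only):

```r
library(mlr3)
library(mlr3extralearners)

# Exponential correlation: 'power' replaces corr$power from GPfit
learner_exp = lrn("regr.gpfit", type = "exponential", power = 1.95)

# Matern correlation: 'matern_nu_k' replaces corr$nu from GPfit;
# nu is derived as (2 * matern_nu_k + 1) / 2, so matern_nu_k = 0 gives nu = 0.5
learner_mat = lrn("regr.gpfit", type = "matern", matern_nu_k = 0)

matern_nu_k = 0
nu = (2 * matern_nu_k + 1) / 2  # 0.5
```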

Dictionary

This Learner can be instantiated via lrn():

lrn("regr.gpfit")

Meta Information

  • Task type: “regr”

  • Predict Types: “response”, “se”

  • Feature Types: “integer”, “numeric”

  • Required Packages: mlr3, mlr3extralearners, GPfit

Parameters

| Id          | Type      | Default     | Levels               | Range              |
|-------------|-----------|-------------|----------------------|--------------------|
| control     | untyped   | NULL        | -                    | -                  |
| nug_thres   | numeric   | -           | -                    | \([0, \infty)\)    |
| trace       | logical   | -           | TRUE, FALSE          | -                  |
| maxit       | integer   | -           | -                    | \([1, \infty)\)    |
| optim_start | untyped   | -           | -                    | -                  |
| scale       | logical   | -           | TRUE, FALSE          | -                  |
| type        | character | exponential | exponential, matern  | -                  |
| matern_nu_k | integer   | 0           | -                    | \([0, \infty)\)    |
| power       | numeric   | 1.95        | -                    | \([1, 2]\)         |
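Hyperparameters from the table can be passed at construction time or updated later via the parameter set; a minimal sketch (the values below are illustrative, not recommendations):

```r
library(mlr3)
library(mlr3extralearners)

# Construct the learner with explicit control over optimization and scaling
learner = lrn(
  "regr.gpfit",
  nug_thres = 20,   # upper bound used in the nugget parameter search
  maxit     = 100,  # maximum iterations of the optimization routine
  scale     = TRUE  # scale inputs to the unit hypercube (the default)
)

# Parameter values can also be changed after construction
learner$param_set$values$trace = FALSE
```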

References

MacDonald B, Ranjan P, Chipman H (2015). "GPfit: An R Package for Fitting a Gaussian Process Model to Deterministic Simulator Outputs." Journal of Statistical Software, 64(12), 1-23.

See also

Author

awinterstetter

Super classes

mlr3::Learner -> mlr3::LearnerRegr -> LearnerRegrGPfit

Methods

Inherited methods


Method new()

Creates a new instance of this R6 class.

Usage

LearnerRegrGPfit$new()

Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerRegrGPfit$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

# Define the Learner
learner = lrn("regr.gpfit")
print(learner)
#> 
#> ── <LearnerRegrGPfit> (regr.gpfit): Gaussian Process (GPfit) ───────────────────
#> • Model: -
#> • Parameters: nug_thres=20, trace=FALSE, maxit=100, optim_start=<NULL>,
#> scale=TRUE
#> • Packages: mlr3, mlr3extralearners, and GPfit
#> • Predict Types: [response] and se
#> • Feature Types: integer and numeric
#> • Encapsulation: none (fallback: -)
#> • Properties:
#> • Other settings: use_weights = 'error'

# Define a Task
task = tsk("mtcars")

# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> $model
#> 
#> Number Of Observations: n = 21
#> Input Dimensions: d = 10
#> 
#> Correlation: Exponential (power = 1.95)
#> Correlation Parameters: 
#>     beta_hat.1 beta_hat.2 beta_hat.3 beta_hat.4 beta_hat.5 beta_hat.6
#> [1] -0.1595304  -8.785053        -10  -8.060125  -7.397818   0.362155
#>     beta_hat.7 beta_hat.8 beta_hat.9 beta_hat.10
#> [1]   1.366509  0.5593852  -8.378369   0.2543255
#> 
#> sigma^2_hat: [1] 38.45304
#> 
#> delta_lb(beta_hat): [1] 0
#> 
#> nugget threshold parameter: 20
#> 
#> 
#> $mlist
#> $mlist$scaled
#> [1] TRUE
#> 
#> $mlist$not_const
#>  [1] "am"   "carb" "cyl"  "disp" "drat" "gear" "hp"   "qsec" "vs"   "wt"  
#> 
#> $mlist$high
#>      am    carb     cyl    disp    drat    gear      hp    qsec      vs      wt 
#>   1.000   4.000   8.000 460.000   4.430   5.000 245.000  22.900   1.000   5.424 
#> 
#> $mlist$low
#>     am   carb    cyl   disp   drat   gear     hp   qsec     vs     wt 
#>  0.000  1.000  4.000 78.700  2.760  3.000 62.000 15.410  0.000  1.513 
#> 
#> 


# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> regr.mse 
#>  9.88129
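Since the learner supports the "se" predict type (see Meta Information above), standard errors can be requested alongside the mean response; a minimal sketch, assuming `task` and `ids` from the example above:

```r
# Request standard errors in addition to the point prediction
learner = lrn("regr.gpfit", predict_type = "se")
learner$train(task, row_ids = ids$train)

predictions = learner$predict(task, row_ids = ids$test)

# Each test-set prediction now carries a standard error
head(predictions$se)
```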