Gradient Boosting Regression Algorithm. Calls gbm::gbm() from package gbm.

Dictionary

This Learner can be instantiated via the dictionary mlr_learners or with the associated sugar function lrn():

mlr_learners$get("regr.gbm")
lrn("regr.gbm")
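
A minimal usage sketch, assuming mlr3, mlr3extralearners, and gbm are installed; the "mtcars" task and "regr.rmse" measure are standard mlr3 objects used here only for illustration:

```r
library(mlr3)
library(mlr3extralearners)

task = tsk("mtcars")              # built-in regression task
learner = lrn("regr.gbm")

learner$train(task)               # fit gbm::gbm() on the task data
prediction = learner$predict(task)
prediction$score(msr("regr.rmse"))  # evaluate on the training data
```

Scoring on the training data is done here only to keep the sketch short; use resampling for honest performance estimates.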

Meta Information

  • Task type: “regr”

  • Predict Types: “response”

  • Feature Types: “integer”, “numeric”, “factor”, “ordered”

  • Required Packages: mlr3, mlr3extralearners, gbm

Parameters

Id                 Type       Default   Levels                             Range
distribution       character  gaussian  gaussian, laplace, poisson, tdist  -
n.trees            integer    100                                          \([1, \infty)\)
interaction.depth  integer    1                                            \([1, \infty)\)
n.minobsinnode     integer    10                                           \([1, \infty)\)
shrinkage          numeric    0.001                                        \([0, \infty)\)
bag.fraction       numeric    0.5                                          \([0, 1]\)
train.fraction     numeric    1                                            \([0, 1]\)
cv.folds           integer    0                                            \((-\infty, \infty)\)
keep.data          logical    FALSE     TRUE, FALSE                        -
verbose            logical    FALSE     TRUE, FALSE                        -
n.cores            integer    1                                            \((-\infty, \infty)\)
var.monotone       untyped    -                                            -

Parameter changes

  • keep.data:

    • Actual default: TRUE

    • Adjusted default: FALSE

    • Reason for change: keep.data = FALSE saves memory during model fitting.

  • n.cores:

    • Actual default: NULL

    • Adjusted default: 1

    • Reason for change: Suppresses gbm's automatic internal parallelization when cv.folds > 0.
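
A sketch of overriding both the adjusted defaults and other hyperparameters at construction time; the specific values are illustrative, not recommendations:

```r
library(mlr3)
library(mlr3extralearners)

learner = lrn("regr.gbm",
  n.trees           = 500,
  interaction.depth = 3,
  shrinkage         = 0.01,
  keep.data         = TRUE  # restore gbm's actual default if memory is not a concern
)

learner$param_set$values  # inspect the configured hyperparameters
```

Parameters can also be changed after construction via `learner$param_set$set_values()`.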

References

Friedman JH (2002). “Stochastic gradient boosting.” Computational Statistics & Data Analysis, 38(4), 367–378.

Author

be-marc

Super classes

mlr3::Learner -> mlr3::LearnerRegr -> LearnerRegrGBM

Methods

Inherited methods


Method new()

Creates a new instance of this R6 class.

Usage

LearnerRegrGBM$new()

Method importance()

The importance scores are extracted by gbm::relative.influence() from the model.

Usage

LearnerRegrGBM$importance()

Returns

Named numeric().
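
A sketch of retrieving importance scores, assuming gbm is installed; note that importance() requires a trained model and errors otherwise:

```r
library(mlr3)
library(mlr3extralearners)

learner = lrn("regr.gbm")
learner$train(tsk("mtcars"))

# Named numeric vector of relative influence per feature,
# as computed by gbm::relative.influence()
learner$importance()
```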


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerRegrGBM$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

learner = mlr3::lrn("regr.gbm")
print(learner)
#> <LearnerRegrGBM:regr.gbm>: Gradient Boosting
#> * Model: -
#> * Parameters: keep.data=FALSE, n.cores=1
#> * Packages: mlr3, mlr3extralearners, gbm
#> * Predict Types:  [response]
#> * Feature Types: integer, numeric, factor, ordered
#> * Properties: importance, missings, weights

# available parameters:
learner$param_set$ids()
#>  [1] "distribution"      "n.trees"           "interaction.depth"
#>  [4] "n.minobsinnode"    "shrinkage"         "bag.fraction"     
#>  [7] "train.fraction"    "cv.folds"          "keep.data"        
#> [10] "verbose"           "n.cores"           "var.monotone"