Fit a generalized linear regression model using a boosting algorithm. Calls mboost::glmboost() from mboost.

Dictionary

This Learner can be instantiated via lrn():

lrn("regr.glmboost")
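As an alternative, the learner can be fetched from the mlr_learners dictionary once mlr3extralearners is loaded; a minimal sketch:

# Assumes mlr3 and mlr3extralearners are installed; loading mlr3extralearners
# registers this learner in the mlr_learners dictionary
library(mlr3)
library(mlr3extralearners)
learner = mlr_learners$get("regr.glmboost")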

Meta Information

  • Task type: “regr”

  • Predict Types: “response”

  • Feature Types: “integer”, “numeric”, “factor”, “ordered”

  • Required Packages: mlr3, mlr3extralearners, mboost

Parameters

Id            | Type      | Default        | Levels                                                                 | Range
offset        | numeric   | NULL           | -                                                                      | (-∞, ∞)
family        | character | Gaussian       | Gaussian, Laplace, Huber, Poisson, GammaReg, NBinomial, Hurdle, custom | -
custom.family | untyped   | -              | -                                                                      | -
nuirange      | untyped   | c(0, 100)      | -                                                                      | -
d             | numeric   | NULL           | -                                                                      | (-∞, ∞)
center        | logical   | TRUE           | TRUE, FALSE                                                            | -
mstop         | integer   | 100            | -                                                                      | (-∞, ∞)
nu            | numeric   | 0.1            | -                                                                      | (-∞, ∞)
risk          | character | inbag          | inbag, oobag, none                                                     | -
oobweights    | untyped   | NULL           | -                                                                      | -
trace         | logical   | FALSE          | TRUE, FALSE                                                            | -
stopintern    | untyped   | FALSE          | -                                                                      | -
na.action     | untyped   | stats::na.omit | -                                                                      | -
contrasts.arg | untyped   | -              | -                                                                      | -
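
For illustration, hyperparameters from the table above can be set when constructing the learner or changed later through its param_set; the specific values below are arbitrary sketches, not tuned recommendations.

# Set hyperparameters at construction time (values are illustrative only)
learner = mlr3::lrn("regr.glmboost",
  family = "Laplace", # absolute-error loss instead of the Gaussian default
  mstop = 200,        # number of boosting iterations
  nu = 0.05           # step size of the boosting algorithm
)

# Values can also be updated after construction
learner$param_set$values$mstop = 500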

References

Bühlmann, Peter, Yu, Bin (2003). “Boosting with the L2 loss: regression and classification.” Journal of the American Statistical Association, 98(462), 324–339.

Author

be-marc

Super classes

mlr3::Learner -> mlr3::LearnerRegr -> LearnerRegrGLMBoost

Methods


Method new()

Create a LearnerRegrGLMBoost object.

Usage

LearnerRegrGLMBoost$new()

Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerRegrGLMBoost$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.
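
For example, a deep clone yields an independent copy whose hyperparameters can be changed without affecting the original learner (the value below is illustrative):

learner = mlr3::lrn("regr.glmboost")
learner2 = learner$clone(deep = TRUE)
learner2$param_set$values$mstop = 250 # leaves the original learner untouched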

Examples

# Define the Learner
learner = mlr3::lrn("regr.glmboost")
print(learner)
#> <LearnerRegrGLMBoost:regr.glmboost>: Boosted Generalized Linear Model
#> * Model: -
#> * Parameters: list()
#> * Packages: mlr3, mlr3extralearners, mboost
#> * Predict Types:  [response]
#> * Feature Types: integer, numeric, factor, ordered
#> * Properties: weights

# Define a Task
task = mlr3::tsk("mtcars")

# Create train and test set
ids = mlr3::partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> 
#> 	 Generalized Linear Models Fitted via Gradient Boosting
#> 
#> Call:
#> glmboost.formula(formula = f, data = data, family = new("boost_family_glm",     fW = function (f)     return(rep(1, length = length(f))), ngradient = function (y,         f, w = 1)     y - f, risk = function (y, f, w = 1)     sum(w * loss(y, f), na.rm = TRUE), offset = function (x,         w, ...)     UseMethod("weighted.mean"), check_y = function (y)     {        if (!is.numeric(y) || !is.null(dim(y)))             stop("response is not a numeric vector but ", sQuote("family = Gaussian()"))        y    }, weights = function (w)     {        switch(weights, any = TRUE, none = isTRUE(all.equal(unique(w),             1)), zeroone = isTRUE(all.equal(unique(w + abs(w -             1)), 1)), case = isTRUE(all.equal(unique(w - floor(w)),             0)))    }, nuisance = function ()     return(NA), response = function (f)     f, rclass = function (f)     NA, name = "Squared Error (Regression)", charloss = "(y - f)^2 \n"),     control = ctrl)
#> 
#> 
#> 	 Squared Error (Regression) 
#> 
#> Loss function: (y - f)^2 
#>  
#> 
#> Number of boosting iterations: mstop = 100 
#> Step size:  0.1 
#> Offset:  20.59524 
#> 
#> Coefficients: 
#>  (Intercept)           am         carb          cyl         disp         drat 
#> 15.927816347  0.097644454 -0.685124229 -0.205646003  0.001155689  0.760123532 
#>         gear           hp           vs           wt 
#>  0.450091709 -0.022548818  0.468744789 -4.740016632 
#> attr(,"offset")
#> [1] 20.59524
#> 


# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> regr.mse 
#> 16.08298
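
Beyond a single train/test split, the learner can also be evaluated with a resampling strategy; a minimal sketch using 3-fold cross-validation and RMSE as the measure (both arbitrary choices for illustration):

# Cross-validate the learner on the same task
library(mlr3)
library(mlr3extralearners)
task = tsk("mtcars")
learner = lrn("regr.glmboost")
resampling = rsmp("cv", folds = 3)
rr = resample(task, learner, resampling)
rr$aggregate(msr("regr.rmse"))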