Fit a generalized linear regression model using a boosting algorithm. Calls mboost::glmboost() from mboost.
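
Internally, the task's target and features are handed to mboost::glmboost(). A minimal sketch of a roughly equivalent direct call (using the built-in mtcars data and default-like settings of a Gaussian family, 100 boosting iterations, and step size 0.1; purely illustrative):

library(mboost)

# fit a boosted linear model on mtcars (sketch with default-like settings)
fit = glmboost(
  mpg ~ .,
  data = mtcars,
  family = Gaussian(),
  control = boost_control(mstop = 100, nu = 0.1)
)
coef(fit)  # coefficients of the boosted linear model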

Dictionary

This Learner can be instantiated via lrn():

lrn("regr.glmboost")

Meta Information

  • Task type: “regr”

  • Predict Types: “response”

  • Feature Types: “integer”, “numeric”, “factor”, “ordered”

  • Required Packages: mlr3, mlr3extralearners, mboost

Parameters

| Id | Type | Default | Levels | Range |
|----|------|---------|--------|-------|
| family | character | Gaussian | Gaussian, Laplace, Huber, Poisson, GammaReg, NBinomial, Hurdle, custom | - |
| custom.family | untyped | - | - | - |
| nuirange | untyped | c(0, 100) | - | - |
| d | numeric | NULL | - | \((-\infty, \infty)\) |
| center | logical | TRUE | TRUE, FALSE | - |
| mstop | integer | 100 | - | \((-\infty, \infty)\) |
| nu | numeric | 0.1 | - | \((-\infty, \infty)\) |
| risk | character | inbag | inbag, oobag, none | - |
| oobweights | untyped | NULL | - | - |
| trace | logical | FALSE | TRUE, FALSE | - |
| stopintern | untyped | FALSE | - | - |
| na.action | untyped | stats::na.omit | - | - |
| contrasts.arg | untyped | - | - | - |
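
Hyperparameters can be passed to lrn() at construction or changed later via the learner's param_set. A small illustration (parameter values chosen only for demonstration):

# set hyperparameters at construction
learner = mlr3::lrn("regr.glmboost", family = "Laplace", mstop = 200, nu = 0.05)

# or update them afterwards
learner$param_set$set_values(trace = TRUE)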

Offset

If a Task contains a column with the offset role, it is automatically incorporated via the offset argument in mboost's training function. No offset is applied during prediction for this learner.
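
As a sketch, assuming an mlr3 version that supports the "offset" column role, an existing numeric column can be assigned that role so it is passed as an offset during training instead of being used as a feature:

task = mlr3::tsk("mtcars")
# use the "disp" column as an offset rather than a feature (purely illustrative)
task$set_col_roles("disp", roles = "offset")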

References

Bühlmann, Peter, Yu, Bin (2003). “Boosting with the L2 loss: regression and classification.” Journal of the American Statistical Association, 98(462), 324–339.

Author

be-marc

Super classes

mlr3::Learner -> mlr3::LearnerRegr -> LearnerRegrGLMBoost

Methods


Method new()

Create a LearnerRegrGLMBoost object.

Usage

LearnerRegrGLMBoost$new()

Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerRegrGLMBoost$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.
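
For example, a deep clone yields an independent copy whose hyperparameters can be modified without affecting the original learner:

learner = mlr3::lrn("regr.glmboost")
learner2 = learner$clone(deep = TRUE)
learner2$param_set$values$mstop = 500  # does not affect `learner`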

Examples

# Define the Learner
learner = mlr3::lrn("regr.glmboost")
print(learner)
#> <LearnerRegrGLMBoost:regr.glmboost>: Boosted Generalized Linear Model
#> * Model: -
#> * Parameters: list()
#> * Packages: mlr3, mlr3extralearners, mboost
#> * Predict Types:  [response]
#> * Feature Types: integer, numeric, factor, ordered
#> * Properties: offset, weights

# Define a Task
task = mlr3::tsk("mtcars")

# Create train and test set
ids = mlr3::partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> 
#> 	 Generalized Linear Models Fitted via Gradient Boosting
#> 
#> Call:
#> glmboost.formula(formula = f, data = data, family = new("boost_family_glm",     fW = function (f)     return(rep(1, length = length(f))), ngradient = function (y,         f, w = 1)     y - f, risk = function (y, f, w = 1)     sum(w * loss(y, f), na.rm = TRUE), offset = function (x,         w, ...)     UseMethod("weighted.mean"), check_y = function (y)     {        if (!is.numeric(y) || !is.null(dim(y)))             stop("response is not a numeric vector but ", sQuote("family = Gaussian()"))        y    }, weights = function (w)     {        switch(weights, any = TRUE, none = isTRUE(all.equal(unique(w),             1)), zeroone = isTRUE(all.equal(unique(w + abs(w -             1)), 1)), case = isTRUE(all.equal(unique(w - floor(w)),             0)))    }, nuisance = function ()     return(NA), response = function (f)     f, rclass = function (f)     NA, name = "Squared Error (Regression)", charloss = "(y - f)^2 \n"),     control = ctrl)
#> 
#> 
#> 	 Squared Error (Regression) 
#> 
#> Loss function: (y - f)^2 
#>  
#> 
#> Number of boosting iterations: mstop = 100 
#> Step size:  0.1 
#> Offset:  21.00476 
#> 
#> Coefficients: 
#> (Intercept)          am        carb         cyl          hp        qsec 
#> 13.03985271  1.72951619 -0.13213496 -0.37517651 -0.02126749  0.22048206 
#>          vs          wt 
#> -0.47304443 -3.85453254 
#> attr(,"offset")
#> [1] 21.00476
#> 


# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> regr.mse 
#>  7.06443
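
# predictions$score() uses the default regression measure (mean squared error);
# other measures from mlr3 can be passed explicitly, for example mean absolute error
predictions$score(mlr3::msr("regr.mae"))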