Fit a generalized linear classification model using a boosting algorithm. Calls mboost::glmboost() from mboost.
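
For reference, a minimal sketch of the underlying mboost call that this learner wraps (assumes the Sonar data from mlbench; the hyperparameter values shown are illustrative only):

library(mboost)
data("Sonar", package = "mlbench")
# boosted GLM with the binomial (logit) loss, 100 iterations, step size 0.1
model = glmboost(Class ~ ., data = Sonar, family = Binomial(),
                 control = boost_control(mstop = 100, nu = 0.1))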

Dictionary

This Learner can be instantiated via lrn():

lrn("classif.glmboost")

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob”

  • Feature Types: “integer”, “numeric”, “factor”, “ordered”

  • Required Packages: mlr3, mlr3extralearners, mboost

Parameters

Id             Type       Default         Levels                          Range
family         character  Binomial        Binomial, AdaExp, AUC, custom   -
custom.family  untyped    -                                               -
link           character  logit           logit, probit                   -
type           character  adaboost        glm, adaboost                   -
center         logical    TRUE            TRUE, FALSE                     -
mstop          integer    100                                             \((-\infty, \infty)\)
nu             numeric    0.1                                             \((-\infty, \infty)\)
risk           character  inbag           inbag, oobag, none              -
oobweights     untyped    NULL                                            -
trace          logical    FALSE           TRUE, FALSE                     -
stopintern     untyped    FALSE                                           -
na.action      untyped    stats::na.omit                                  -
contrasts.arg  untyped    -                                               -
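
Hyperparameters can be set at construction or changed later through the learner's param_set; the values below are purely illustrative:

learner = mlr3::lrn("classif.glmboost", mstop = 200, nu = 0.05)
learner$param_set$values$trace = TRUE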

Offset

If a Task contains a column with the offset role, it is automatically incorporated via the offset argument in mboost's training function. No offset is applied during prediction for this learner.
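
A minimal sketch of assigning the offset role, assuming the task already contains a numeric column named "my_offset" (the column name is hypothetical):

# pass the hypothetical column "my_offset" to mboost's offset argument during training
task$set_col_roles("my_offset", roles = "offset")
learner$train(task)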

References

Bühlmann, Peter, Yu, Bin (2003). “Boosting with the L2 loss: regression and classification.” Journal of the American Statistical Association, 98(462), 324–339.

Author

be-marc

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifGLMBoost

Methods

Method new()

Create a LearnerClassifGLMBoost object.
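
Usage (equivalent to the lrn("classif.glmboost") shortcut shown above):

LearnerClassifGLMBoost$new()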


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifGLMBoost$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

# Define the Learner
learner = mlr3::lrn("classif.glmboost")
print(learner)
#> 
#> ── <LearnerClassifGLMBoost> (classif.glmboost): Boosted Generalized Linear Model
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3, mlr3extralearners, and mboost
#> • Predict Types: [response] and prob
#> • Feature Types: integer, numeric, factor, and ordered
#> • Encapsulation: none (fallback: -)
#> • Properties: offset, twoclass, and weights
#> • Other settings: use_weights = 'use'

# Define a Task
task = mlr3::tsk("sonar")

# Create train and test set
ids = mlr3::partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> 
#> 	 Generalized Linear Models Fitted via Gradient Boosting
#> 
#> Call:
#> glmboost.formula(formula = f, data = data, family = new("boost_family_glm",     fW = function (f)     {        f <- pmin(abs(f), 36) * sign(f)        p <- exp(f)/(exp(f) + exp(-f))        4 * p * (1 - p)    }, ngradient = function (y, f, w = 1)     {        exp2yf <- exp(-2 * y * f)        -(-2 * y * exp2yf)/(log(2) * (1 + exp2yf))    }, risk = function (y, f, w = 1)     sum(w * loss(y, f), na.rm = TRUE), offset = function (y,         w)     {        p <- weighted.mean(y > 0, w)        1/2 * log(p/(1 - p))    }, check_y = function (y)     {        if (!is.factor(y))             stop("response is not a factor but ", sQuote("family = Binomial()"))        if (nlevels(y) != 2)             stop("response is not a factor at two levels but ",                 sQuote("family = Binomial()"))        return(c(-1, 1)[as.integer(y)])    }, weights = function (w)     {        switch(weights, any = TRUE, none = isTRUE(all.equal(unique(w),             1)), zeroone = isTRUE(all.equal(unique(w + abs(w -             1)), 1)), case = isTRUE(all.equal(unique(w - floor(w)),             0)))    }, nuisance = function ()     return(NA), response = function (f)     {        f <- pmin(abs(f), 36) * sign(f)        p <- exp(f)/(exp(f) + exp(-f))        return(p)    }, rclass = function (f)     (f > 0) + 1, name = "Negative Binomial Likelihood (logit link)",     charloss = c("{ \n", "    f <- pmin(abs(f), 36) * sign(f) \n",     "    p <- exp(f)/(exp(f) + exp(-f)) \n", "    y <- (y + 1)/2 \n",     "    -y * log(p) - (1 - y) * log(1 - p) \n", "} \n")), control = ctrl)
#> 
#> 
#> 	 Negative Binomial Likelihood (logit link) 
#> 
#> Loss function: { 
#>      f <- pmin(abs(f), 36) * sign(f) 
#>      p <- exp(f)/(exp(f) + exp(-f)) 
#>      y <- (y + 1)/2 
#>      -y * log(p) - (1 - y) * log(1 - p) 
#>  } 
#>  
#> 
#> Number of boosting iterations: mstop = 100 
#> Step size:  0.1 
#> Offset:  -0.06483891 
#> 
#> Coefficients: 
#> 
#> NOTE: Coefficients from a Binomial model are half the size of coefficients
#>  from a model fitted via glm(... , family = 'binomial').
#> See Warning section in ?coef.mboost
#> (Intercept)          V1         V12         V22         V23         V28 
#>   1.7580064 -13.0417601  -2.1581570  -0.1886050  -0.4668506  -0.2536112 
#>         V34         V36         V44         V45         V49         V51 
#>   0.4023615   0.6089511  -0.3820716  -0.7286233  -6.0193264  -1.7621887 
#>         V52         V54         V57          V7 
#> -12.9161145 -18.6720167  11.2007696   1.6442517 
#> attr(,"offset")
#> [1] -0.06483891
#> 


# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> classif.ce 
#>  0.3333333
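
To obtain class probabilities instead of hard labels, switch the predict type before predicting; the AUC measure below is just one illustrative choice:

learner$predict_type = "prob"
predictions = learner$predict(task, row_ids = ids$test)
predictions$score(mlr3::msr("classif.auc"))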