Fit a generalized linear classification model using a boosting algorithm. Calls mboost::glmboost() from mboost.

Dictionary

This Learner can be instantiated via lrn():

lrn("classif.glmboost")

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob”

  • Feature Types: “integer”, “numeric”, “factor”, “ordered”

  • Required Packages: mlr3, mlr3extralearners, mboost

Parameters

Id             Type       Default         Levels                         Range
family         character  Binomial        Binomial, AdaExp, AUC, custom  -
custom.family  untyped    -               -                              -
link           character  logit           logit, probit                  -
type           character  adaboost        glm, adaboost                  -
center         logical    TRUE            TRUE, FALSE                    -
mstop          integer    100             -                              \((-\infty, \infty)\)
nu             numeric    0.1             -                              \((-\infty, \infty)\)
risk           character  inbag           inbag, oobag, none             -
oobweights     untyped    NULL            -                              -
trace          logical    FALSE           TRUE, FALSE                    -
stopintern     untyped    FALSE           -                              -
na.action      untyped    stats::na.omit  -                              -
contrasts.arg  untyped    -               -                              -

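An illustrative sketch (the parameter values are chosen arbitrarily, not recommendations): hyperparameters can be set when constructing the learner via lrn(), or afterwards through the learner's param_set.

# set hyperparameters at construction time
learner = mlr3::lrn("classif.glmboost", family = "AdaExp", mstop = 200, nu = 0.05)
# or update values after construction
learner$param_set$set_values(trace = TRUE)
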
Offset

If a Task contains a column with the offset role, it is automatically passed to the offset argument of mboost::glmboost() during training. No offset is applied during prediction for this learner.
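
A minimal sketch of assigning the offset role, assuming the installed mlr3 version supports the "offset" column role; the column name my_offset and its constant value are purely illustrative.

dat = mlr3::tsk("sonar")$data()
dat$my_offset = 0  # hypothetical offset column, constant here only for illustration
task = mlr3::as_task_classif(dat, target = "Class")
task$set_col_roles("my_offset", roles = "offset")

learner = mlr3::lrn("classif.glmboost")
learner$train(task)  # the offset column is forwarded to mboost's offset argument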

References

Bühlmann, Peter, Yu, Bin (2003). “Boosting with the L2 loss: regression and classification.” Journal of the American Statistical Association, 98(462), 324–339.

See also

Author

be-marc

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifGLMBoost

Methods

Inherited methods


Method new()

Create a LearnerClassifGLMBoost object.
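
Usage

LearnerClassifGLMBoost$new()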


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifGLMBoost$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.
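
For illustration (not part of the original page): a deep clone creates an independent copy whose configuration can be changed without affecting the original learner.

learner = mlr3::lrn("classif.glmboost")
learner2 = learner$clone(deep = TRUE)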

Examples

# Define the Learner
learner = mlr3::lrn("classif.glmboost")
print(learner)
#> 
#> ── <LearnerClassifGLMBoost> (classif.glmboost): Boosted Generalized Linear Model
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3, mlr3extralearners, and mboost
#> • Predict Types: [response] and prob
#> • Feature Types: integer, numeric, factor, and ordered
#> • Encapsulation: none (fallback: -)
#> • Properties: offset, twoclass, and weights
#> • Other settings: use_weights = 'use'

# Define a Task
task = mlr3::tsk("sonar")

# Create train and test set
ids = mlr3::partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> 
#> 	 Generalized Linear Models Fitted via Gradient Boosting
#> 
#> Call:
#> glmboost.formula(formula = f, data = data, family = new("boost_family_glm",     fW = function (f)     {        f <- pmin(abs(f), 36) * sign(f)        p <- exp(f)/(exp(f) + exp(-f))        4 * p * (1 - p)    }, ngradient = function (y, f, w = 1)     {        exp2yf <- exp(-2 * y * f)        -(-2 * y * exp2yf)/(log(2) * (1 + exp2yf))    }, risk = function (y, f, w = 1)     sum(w * loss(y, f), na.rm = TRUE), offset = function (y,         w)     {        p <- weighted.mean(y > 0, w)        1/2 * log(p/(1 - p))    }, check_y = function (y)     {        if (!is.factor(y))             stop("response is not a factor but ", sQuote("family = Binomial()"))        if (nlevels(y) != 2)             stop("response is not a factor at two levels but ",                 sQuote("family = Binomial()"))        return(c(-1, 1)[as.integer(y)])    }, weights = function (w)     {        switch(weights, any = TRUE, none = isTRUE(all.equal(unique(w),             1)), zeroone = isTRUE(all.equal(unique(w + abs(w -             1)), 1)), case = isTRUE(all.equal(unique(w - floor(w)),             0)))    }, nuisance = function ()     return(NA), response = function (f)     {        f <- pmin(abs(f), 36) * sign(f)        p <- exp(f)/(exp(f) + exp(-f))        return(p)    }, rclass = function (f)     (f > 0) + 1, name = "Negative Binomial Likelihood (logit link)",     charloss = c("{ \n", "    f <- pmin(abs(f), 36) * sign(f) \n",     "    p <- exp(f)/(exp(f) + exp(-f)) \n", "    y <- (y + 1)/2 \n",     "    -y * log(p) - (1 - y) * log(1 - p) \n", "} \n")), control = ctrl)
#> 
#> 
#> 	 Negative Binomial Likelihood (logit link) 
#> 
#> Loss function: { 
#>      f <- pmin(abs(f), 36) * sign(f) 
#>      p <- exp(f)/(exp(f) + exp(-f)) 
#>      y <- (y + 1)/2 
#>      -y * log(p) - (1 - y) * log(1 - p) 
#>  } 
#>  
#> 
#> Number of boosting iterations: mstop = 100 
#> Step size:  0.1 
#> Offset:  -0.05040235 
#> 
#> Coefficients: 
#> 
#> NOTE: Coefficients from a Binomial model are half the size of coefficients
#>  from a model fitted via glm(... , family = 'binomial').
#> See Warning section in ?coef.mboost
#> (Intercept)          V1         V11         V12         V15         V16 
#>   1.6471269  -3.0912281  -2.5800616  -0.7015937   0.4822619   0.2717666 
#>         V21         V22         V28         V31         V36          V4 
#>  -0.2555646  -0.6314661  -0.3211433   0.2554035   0.8358715  -0.9337844 
#>         V40         V44         V45         V48         V49         V52 
#>   0.0595925  -0.7222719  -1.2700392  -0.3563289  -4.1983325  -6.9330571 
#>          V7 
#>   0.1603530 
#> attr(,"offset")
#> [1] -0.05040235
#> 


# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> classif.ce 
#>  0.2608696
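
# As a follow-up sketch (output omitted; the exact score depends on the random
# partition), the learner can also predict probabilities and be scored with a
# probability-based measure such as AUC.
learner$predict_type = "prob"
predictions = learner$predict(task, row_ids = ids$test)
predictions$score(mlr3::msr("classif.auc"))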