Stochastic Gradient Descent for learning various linear models. Calls RWeka::make_Weka_classifier() from RWeka.

Initial parameter values

  • F:

    • Has only 2 of the 5 original loss functions: 0 = hinge loss (SVM) and 1 = log loss (logistic regression), with 0 (hinge loss) remaining the default

    • Reason for change: this learner should only expose loss functions appropriate for classification tasks (see the sketch after this list)
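
For example, switching to log loss is a one-liner (a minimal sketch; assumes mlr3 and mlr3extralearners are loaded):

# F is a character parameter; "1" selects log loss (logistic regression)
learner = lrn("classif.sgd", F = "1")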

Custom mlr3 parameters

  • output_debug_info:

    • original id: output-debug-info

  • do_not_check_capabilities:

    • original id: do-not-check-capabilities

  • num_decimal_places:

    • original id: num-decimal-places

  • batch_size:

    • original id: batch-size

  • Reason for change: the ids of the control arguments listed above were renamed because the original Weka ids contain hyphens, which are not valid in R argument names (see the sketch after this list)
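
The renamed ids are passed like ordinary R arguments (a minimal sketch; assumes mlr3 and mlr3extralearners are loaded):

# "batch-size" and "output-debug-info" are the original Weka ids
learner = lrn("classif.sgd", batch_size = 50, output_debug_info = TRUE)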

Dictionary

This Learner can be instantiated via lrn():

lrn("classif.sgd")
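
Equivalently, the learner can be retrieved from the learner dictionary:

learner = mlr_learners$get("classif.sgd")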

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob” (see the sketch after this list)

  • Feature Types: “logical”, “integer”, “numeric”, “factor”, “ordered”

  • Required Packages: mlr3, RWeka
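
To request probability predictions, set the predict type at construction (a sketch; with the default hinge loss the predicted probabilities may be uninformative 0/1 values, so log loss is selected here):

learner = lrn("classif.sgd", F = "1", predict_type = "prob")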

Parameters

Id                          Type       Default  Levels       Range
subset                      untyped    -                     -
na.action                   untyped    -                     -
F                           character  0        0, 1         -
L                           numeric    0.01                  \((-\infty, \infty)\)
R                           numeric    1e-04                 \((-\infty, \infty)\)
E                           integer    500                   \((-\infty, \infty)\)
C                           numeric    0.001                 \((-\infty, \infty)\)
N                           logical    -        TRUE, FALSE  -
M                           logical    -        TRUE, FALSE  -
S                           integer    1                     \((-\infty, \infty)\)
output_debug_info           logical    FALSE    TRUE, FALSE  -
do_not_check_capabilities   logical    FALSE    TRUE, FALSE  -
num_decimal_places          integer    2                     \([1, \infty)\)
batch_size                  integer    100                   \([1, \infty)\)
options                     untyped    NULL                  -
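
Hyperparameters can also be set after construction through the learner's parameter set (a minimal sketch; in Weka's SGD, E is the number of epochs and L the learning rate):

learner = lrn("classif.sgd")
learner$param_set$values$E = 1000   # more epochs
learner$param_set$values$L = 0.001  # smaller learning rate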

Author

damirpolat

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifSGD

Active bindings

marshaled

(logical(1))
Whether the learner has been marshaled.

Methods

Method new()

Creates a new instance of this R6 class.

Usage

LearnerClassifSGD$new()

Method marshal()

Marshal the learner's model.

Usage

LearnerClassifSGD$marshal(...)

Arguments

...

(any)
Additional arguments passed to mlr3::marshal_model().


Method unmarshal()

Unmarshal the learner's model.

Usage

LearnerClassifSGD$unmarshal(...)

Arguments

...

(any)
Additional arguments passed to mlr3::unmarshal_model().
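
A minimal round trip (sketch; marshaling converts the external Weka model into a form that survives serialization, e.g. for parallel workers):

learner = lrn("classif.sgd")
learner$train(tsk("sonar"))
learner$marshal()
learner$marshaled    # TRUE
learner$unmarshal()
learner$marshaled    # FALSE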


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifSGD$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.
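
For example, a deep clone yields an independent copy whose hyperparameters can be changed without affecting the original (a minimal sketch):

learner = lrn("classif.sgd")
learner2 = learner$clone(deep = TRUE)
learner2$param_set$values$E = 1000  # leaves learner untouched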

Examples

# Load the required packages (classif.sgd is provided by mlr3extralearners)
library(mlr3)
library(mlr3extralearners)

# Define the Learner
learner = lrn("classif.sgd")
print(learner)
#> 
#> ── <LearnerClassifSGD> (classif.sgd): Stochastic Gradient Descent ──────────────
#> • Model: -
#> • Parameters: F=0
#> • Packages: mlr3 and RWeka
#> • Predict Types: [response] and prob
#> • Feature Types: logical, integer, numeric, factor, and ordered
#> • Encapsulation: none (fallback: -)
#> • Properties: marshal, missings, and twoclass
#> • Other settings: use_weights = 'error'

# Define a Task
task = tsk("sonar")

# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> Loss function: Hinge loss (SVM)
#> 
#> Class = 
#> 
#>         -1.2879 (normalized) V1
#>  +       0.0127 (normalized) V10
#>  +      -1.1988 (normalized) V11
#>  +      -3.5554 (normalized) V12
#>  +      -0.3567 (normalized) V13
#>  +      -0.2014 (normalized) V14
#>  +      -0.1684 (normalized) V15
#>  +       1.7359 (normalized) V16
#>  +       3.4325 (normalized) V17
#>  +      -1.1491 (normalized) V18
#>  +      -2.1947 (normalized) V19
#>  +      -0.565  (normalized) V2
#>  +      -0.3345 (normalized) V20
#>  +      -0.5402 (normalized) V21
#>  +      -0.0779 (normalized) V22
#>  +      -0.9326 (normalized) V23
#>  +      -0.9637 (normalized) V24
#>  +       1.2667 (normalized) V25
#>  +       0.0086 (normalized) V26
#>  +      -0.2682 (normalized) V27
#>  +      -2.4621 (normalized) V28
#>  +      -0.1399 (normalized) V29
#>  +       1.0459 (normalized) V3
#>  +       0.2115 (normalized) V30
#>  +       3.3804 (normalized) V31
#>  +      -1.5709 (normalized) V32
#>  +      -1.4347 (normalized) V33
#>  +       0.97   (normalized) V34
#>  +      -2.7906 (normalized) V35
#>  +       1.0106 (normalized) V36
#>  +       3.102  (normalized) V37
#>  +      -1.4711 (normalized) V38
#>  +       1.582  (normalized) V39
#>  +      -1.9547 (normalized) V4
#>  +       0.9117 (normalized) V40
#>  +      -0.3413 (normalized) V41
#>  +       1.4251 (normalized) V42
#>  +      -0.0025 (normalized) V43
#>  +      -0.1842 (normalized) V44
#>  +      -2.1033 (normalized) V45
#>  +      -1.1752 (normalized) V46
#>  +      -0.7741 (normalized) V47
#>  +      -2.4428 (normalized) V48
#>  +      -3.6229 (normalized) V49
#>  +      -0.6751 (normalized) V5
#>  +       3.2782 (normalized) V50
#>  +       0.5618 (normalized) V51
#>  +      -2.3762 (normalized) V52
#>  +      -1.989  (normalized) V53
#>  +      -0.7182 (normalized) V54
#>  +       0.8471 (normalized) V55
#>  +      -0.6886 (normalized) V56
#>  +      -0.1685 (normalized) V57
#>  +      -0.666  (normalized) V58
#>  +      -1.0231 (normalized) V59
#>  +       1.1075 (normalized) V6
#>  +      -0.2588 (normalized) V60
#>  +       1.6336 (normalized) V7
#>  +       1.2748 (normalized) V8
#>  +      -1.287  (normalized) V9
#>  +       5.2   


# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> classif.ce 
#>  0.2463768
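
score() defaults to the classification error for classification tasks; other measures can be passed explicitly (a minimal sketch using the built-in accuracy measure):

predictions$score(msr("classif.acc"))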