
Stochastic Gradient Descent for learning various linear models. Calls RWeka::make_Weka_classifier() from RWeka.

Initial parameter values

  • F:

    • Offers only 2 of the 5 original loss functions: 0 = hinge loss (SVM) and 1 = log loss (logistic regression), with 0 (hinge loss) remaining the default

    • Reason for change: this learner should only contain loss functions appropriate for classification tasks
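The two remaining losses differ in shape around the decision margin; a quick, self-contained base-R illustration (independent of RWeka) of what the settings F = 0 and F = 1 optimize:

```r
# Hinge loss (F = 0) and log loss (F = 1) as functions of the margin
# m = y * f(x), with labels y in {-1, +1}
hinge_loss <- function(m) pmax(0, 1 - m)
log_loss   <- function(m) log(1 + exp(-m))

m <- c(-1, 0, 1, 2)
hinge_loss(m)  # 2 1 0 0 -- exactly zero once the margin exceeds 1
log_loss(m)    # strictly positive everywhere; its minimizer yields probabilities
```

The hinge loss ignores points classified with margin at least 1, which is why probability estimates come naturally from log loss but not from hinge loss.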

Custom mlr3 parameters

  • output_debug_info:

    • original id: output-debug-info

  • do_not_check_capabilities:

    • original id: do-not-check-capabilities

  • num_decimal_places:

    • original id: num-decimal-places

  • batch_size:

    • original id: batch-size

  • Reason for change: the ids of these control arguments were changed because the original ids contain dashes, an irregular pattern for R argument names
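The renaming only swaps dashes for underscores, so the mapping back to the original Weka ids can be sketched in base R:

```r
# The four renamed control arguments; the original Weka ids are recovered
# by turning the underscores back into dashes
mlr3_ids <- c("output_debug_info", "do_not_check_capabilities",
              "num_decimal_places", "batch_size")
original_ids <- gsub("_", "-", mlr3_ids, fixed = TRUE)
original_ids  # "output-debug-info" "do-not-check-capabilities" ...
```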

Dictionary

This Learner can be instantiated via lrn():

lrn("classif.sgd")

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob”

  • Feature Types: “logical”, “integer”, “numeric”, “factor”, “ordered”

  • Required Packages: mlr3, RWeka

Parameters

Id                         Type       Default  Levels       Range
subset                     untyped    -        -            -
na.action                  untyped    -        -            -
F                          character  0        0, 1         -
L                          numeric    0.01     -            (-∞, ∞)
R                          numeric    1e-04    -            (-∞, ∞)
E                          integer    500      -            (-∞, ∞)
C                          numeric    0.001    -            (-∞, ∞)
N                          logical    -        TRUE, FALSE  -
M                          logical    -        TRUE, FALSE  -
S                          integer    1        -            (-∞, ∞)
output_debug_info          logical    FALSE    TRUE, FALSE  -
do_not_check_capabilities  logical    FALSE    TRUE, FALSE  -
num_decimal_places         integer    2        -            [1, ∞)
batch_size                 integer    100      -            [1, ∞)
options                    untyped    NULL     -            -
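The single-letter parameters are passed through to Weka's SGD implementation. Assuming (from the matching defaults above; the meanings are not stated on this page) that L is the learning rate, R the ridge regularization constant, and E the number of epochs, the training loop for the default hinge loss can be sketched in base R:

```r
# Illustrative sketch of SGD with hinge loss; L, R, E interpreted as
# learning rate, L2 penalty, and epochs (an assumption, not this page's API)
sgd_hinge <- function(X, y, L = 0.01, R = 1e-4, E = 500) {
  w <- numeric(ncol(X)); b <- 0
  for (epoch in seq_len(E)) {
    for (i in sample(nrow(X))) {          # visit rows in random order
      m <- y[i] * (sum(w * X[i, ]) + b)   # signed margin of row i
      grad_w <- R * w                     # ridge penalty gradient
      if (m < 1) {                        # hinge loss is active
        grad_w <- grad_w - y[i] * X[i, ]
        b <- b + L * y[i]
      }
      w <- w - L * grad_w
    }
  }
  list(w = w, b = b)
}

set.seed(1)
X <- rbind(matrix(rnorm(40, -1), 20), matrix(rnorm(40, 1), 20))
y <- rep(c(-1, 1), each = 20)
fit <- sgd_hinge(X, y)
acc <- mean(sign(X %*% fit$w + fit$b) == y)  # training accuracy on the clusters
```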

See also

Author

damirpolat

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifSGD

Active bindings

marshaled

(logical(1))
Whether the learner has been marshaled.

Methods

Inherited methods


Method new()

Creates a new instance of this R6 class.

Usage

LearnerClassifSGD$new()

Method marshal()

Marshal the learner's model.

Usage

LearnerClassifSGD$marshal(...)

Arguments

...

(any)
Additional arguments passed to mlr3::marshal_model().


Method unmarshal()

Unmarshal the learner's model.

Usage

LearnerClassifSGD$unmarshal(...)

Arguments

...

(any)
Additional arguments passed to mlr3::unmarshal_model().


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifSGD$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

# Define the Learner
learner = lrn("classif.sgd")
print(learner)
#> 
#> ── <LearnerClassifSGD> (classif.sgd): Stochastic Gradient Descent ──────────────
#> • Model: -
#> • Parameters: F=0
#> • Packages: mlr3 and RWeka
#> • Predict Types: [response] and prob
#> • Feature Types: logical, integer, numeric, factor, and ordered
#> • Encapsulation: none (fallback: -)
#> • Properties: marshal, missings, and twoclass
#> • Other settings: use_weights = 'error', predict_raw = 'FALSE'

# Define a Task
task = tsk("sonar")

# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> Loss function: Hinge loss (SVM)
#> 
#> Class = 
#> 
#>         -3.2815 (normalized) V1
#>  +      -1.1128 (normalized) V10
#>  +      -2.6931 (normalized) V11
#>  +      -2.8282 (normalized) V12
#>  +       1.2584 (normalized) V13
#>  +       0.6384 (normalized) V14
#>  +      -0.9981 (normalized) V15
#>  +       2.2635 (normalized) V16
#>  +       0.5865 (normalized) V17
#>  +       0.3444 (normalized) V18
#>  +      -1.6858 (normalized) V19
#>  +      -0.8702 (normalized) V2
#>  +      -0.7273 (normalized) V20
#>  +       1.0166 (normalized) V21
#>  +       0.5847 (normalized) V22
#>  +      -0.4237 (normalized) V23
#>  +      -2.1342 (normalized) V24
#>  +      -0.4969 (normalized) V25
#>  +       0.6542 (normalized) V26
#>  +       0.4514 (normalized) V27
#>  +      -1.1148 (normalized) V28
#>  +      -0.8136 (normalized) V29
#>  +       1.3239 (normalized) V3
#>  +      -1.3281 (normalized) V30
#>  +       3.2132 (normalized) V31
#>  +       0.8784 (normalized) V32
#>  +       0.1886 (normalized) V33
#>  +      -1.2788 (normalized) V34
#>  +      -0.4299 (normalized) V35
#>  +       4.6085 (normalized) V36
#>  +       1.1064 (normalized) V37
#>  +      -0.542  (normalized) V38
#>  +      -0.1986 (normalized) V39
#>  +      -0.8741 (normalized) V4
#>  +       1.4992 (normalized) V40
#>  +      -0.1388 (normalized) V41
#>  +       0.4978 (normalized) V42
#>  +      -0.341  (normalized) V43
#>  +      -2.6179 (normalized) V44
#>  +      -3.0766 (normalized) V45
#>  +      -1.7817 (normalized) V46
#>  +      -0.6134 (normalized) V47
#>  +      -0.7883 (normalized) V48
#>  +      -2.9695 (normalized) V49
#>  +      -0.082  (normalized) V5
#>  +       3.977  (normalized) V50
#>  +       0.0487 (normalized) V51
#>  +      -3.2543 (normalized) V52
#>  +      -1.6965 (normalized) V53
#>  +      -1.9211 (normalized) V54
#>  +       1.9667 (normalized) V55
#>  +       0.1172 (normalized) V56
#>  +       0.5672 (normalized) V57
#>  +      -0.9135 (normalized) V58
#>  +      -2.6774 (normalized) V59
#>  +      -0.0804 (normalized) V6
#>  +      -0.5669 (normalized) V60
#>  +       1.9455 (normalized) V7
#>  +       1.9616 (normalized) V8
#>  +      -0.6601 (normalized) V9
#>  +       4.54  


# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> classif.ce 
#>  0.2463768