Shrinkage Discriminant Analysis for classification.
Calls sda::sda() from package sda.
Parameters
| Id | Type | Default | Levels | Range |
|----|------|---------|--------|-------|
| lambda | numeric | - | - | \([0, 1]\) |
| lambda.var | numeric | - | - | \([0, 1]\) |
| lambda.freqs | numeric | - | - | \([0, 1]\) |
| diagonal | logical | FALSE | TRUE, FALSE | - |
| verbose | logical | FALSE | TRUE, FALSE | - |
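By default the shrinkage intensities are estimated from the data; they can also be fixed via the hyperparameters above when constructing the learner. A minimal sketch (the parameter values shown are illustrative, not recommended defaults; assumes mlr3 and the package providing `classif.sda` are loaded):

```r
library(mlr3)

# Construct the learner with fixed shrinkage intensities instead of
# letting sda::sda() estimate them from the data
learner = lrn("classif.sda",
  lambda   = 0.5,   # shrinkage intensity for the correlation matrix
  diagonal = TRUE,  # use a diagonal covariance (diagonal discriminant analysis)
  verbose  = FALSE  # suppress estimation messages during training
)

# Inspect the configured hyperparameters
learner$param_set$values
```

Parameters left unset (here `lambda.var` and `lambda.freqs`) are still estimated automatically by sda::sda() during training.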
References
Ahdesmäki M, Strimmer K (2010). “Feature selection in omics prediction problems using cat scores and false nondiscovery rate control.” The Annals of Applied Statistics, 4(1). ISSN 1932-6157, doi:10.1214/09-AOAS277.
See also
as.data.table(mlr_learners) for a table of available Learners in the running session (depending on the loaded packages).
Chapter in the mlr3book: https://mlr3book.mlr-org.com/chapters/chapter2/data_and_basic_modeling.html#sec-learners
mlr3learners for a selection of recommended learners.
mlr3cluster for unsupervised clustering learners.
mlr3pipelines to combine learners with pre- and postprocessing steps.
mlr3tuning for tuning of hyperparameters, mlr3tuningspaces for established default tuning spaces.
Super classes
mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifSda
Methods
Inherited methods
mlr3::Learner$base_learner()
mlr3::Learner$configure()
mlr3::Learner$encapsulate()
mlr3::Learner$format()
mlr3::Learner$help()
mlr3::Learner$predict()
mlr3::Learner$predict_newdata()
mlr3::Learner$print()
mlr3::Learner$reset()
mlr3::Learner$selected_features()
mlr3::Learner$train()
mlr3::LearnerClassif$predict_newdata_fast()
Examples
# Define the Learner
learner = lrn("classif.sda")
print(learner)
#>
#> ── <LearnerClassifSda> (classif.sda): Shrinkage Discriminant Analysis ──────────
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3 and sda
#> • Predict Types: [response] and prob
#> • Feature Types: integer and numeric
#> • Encapsulation: none (fallback: -)
#> • Properties: multiclass and twoclass
#> • Other settings: use_weights = 'error', predict_raw = 'FALSE'
# Define a Task
task = tsk("sonar")
# Create train and test set
ids = partition(task)
# Train the learner on the training ids
learner$train(task, row_ids = ids$train)
#> Number of variables: 60
#> Number of observations: 139
#> Number of classes: 2
#>
#> Estimating optimal shrinkage intensity lambda.freq (frequencies): 1
#> Estimating variances (pooled across classes)
#> Estimating optimal shrinkage intensity lambda.var (variance vector): 0.0204
#>
#>
#> Computing inverse correlation matrix (pooled across classes)
#> Estimating optimal shrinkage intensity lambda (correlation matrix): 0.1091
print(learner$model)
#> $regularization
#> lambda lambda.var lambda.freqs
#> 0.10911212 0.02041539 1.00000000
#>
#> $freqs
#> M R
#> 0.5 0.5
#>
#> $alpha
#> M R
#> -4.310398 1.135969
#>
#> $beta
#> V1 V10 V11 V12 V13 V14 V15
#> M -2.960019 2.537174 4.491107 4.099336 1.638956 -2.728899 -0.8475671
#> R 2.960019 -2.537174 -4.491107 -4.099336 -1.638956 2.728899 0.8475671
#> V16 V17 V18 V19 V2 V20 V21
#> M -1.755772 -0.4249494 0.3019165 0.1008483 -2.930586 0.7168723 0.804152
#> R 1.755772 0.4249494 -0.3019165 -0.1008483 2.930586 -0.7168723 -0.804152
#> V22 V23 V24 V25 V26 V27 V28
#> M 0.1109363 1.29756 1.345686 -1.077432 -1.850477 -0.675901 1.33755
#> R -0.1109363 -1.29756 -1.345686 1.077432 1.850477 0.675901 -1.33755
#> V29 V3 V30 V31 V32 V33 V34
#> M -1.060161 -8.236826 1.816521 -0.9012121 0.1011908 -0.8166067 -1.540803
#> R 1.060161 8.236826 -1.816521 0.9012121 -0.1011908 0.8166067 1.540803
#> V35 V36 V37 V38 V39 V4 V40
#> M -0.05319385 -1.257508 -0.8193422 1.317181 2.752282 10.56202 -4.240319
#> R 0.05319385 1.257508 0.8193422 -1.317181 -2.752282 -10.56202 4.240319
#> V41 V42 V43 V44 V45 V46 V47
#> M 0.1154964 0.9414536 0.8546166 0.03839112 1.334726 2.833224 5.882296
#> R -0.1154964 -0.9414536 -0.8546166 -0.03839112 -1.334726 -2.833224 -5.882296
#> V48 V49 V5 V50 V51 V52 V53 V54
#> M -3.481797 7.268323 6.008035 -18.2585 -3.64491 -2.09687 -2.66903 1.802922
#> R 3.481797 -7.268323 -6.008035 18.2585 3.64491 2.09687 2.66903 -1.802922
#> V55 V56 V57 V58 V59 V6 V60
#> M -18.12127 0.7300208 2.388298 3.402763 14.5991 -0.7081797 -7.477579
#> R 18.12127 -0.7300208 -2.388298 -3.402763 -14.5991 0.7081797 7.477579
#> V7 V8 V9
#> M 2.540485 -2.479031 2.165364
#> R -2.540485 2.479031 -2.165364
#> attr(,"class")
#> [1] "shrinkage"
#>
#> attr(,"class")
#> [1] "sda"
# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)
#> Prediction uses 60 features.
# Score the predictions
predictions$score()
#> classif.ce
#> 0.3043478
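Since the learner also supports the `prob` predict type (see the print output above), probability predictions can be scored with a threshold-free measure. A sketch continuing from the example above (the resulting score depends on the random train/test split, so no expected output is shown):

```r
# Request probability predictions instead of hard class labels
learner$predict_type = "prob"

# Retrain and predict on the same split as before
learner$train(task, row_ids = ids$train)
predictions = learner$predict(task, row_ids = ids$test)

# Score with AUC, a measure defined for twoclass probability predictions
predictions$score(msr("classif.auc"))
```

Switching `predict_type` must happen before training or predicting; the classification error from `predictions$score()` remains available as the default measure.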