Shrinkage Discriminant Analysis for classification.
Calls sda::sda() from package sda.
Parameters
| Id | Type | Default | Levels | Range |
|---|---|---|---|---|
| lambda | numeric | - | - | \([0, 1]\) |
| lambda.var | numeric | - | - | \([0, 1]\) |
| lambda.freqs | numeric | - | - | \([0, 1]\) |
| diagonal | logical | FALSE | TRUE, FALSE | - |
| verbose | logical | FALSE | TRUE, FALSE | - |
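Hyperparameters can be supplied directly when constructing the learner; a minimal sketch (the parameter values below are chosen only for illustration, not as recommended settings):

```r
library(mlr3)
library(mlr3extralearners)  # provides the classif.sda learner

# Construct the learner with explicit hyperparameters
learner = lrn("classif.sda", lambda = 0.5, diagonal = TRUE)

# Inspect the configured values
learner$param_set$values
```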
References
Ahdesmäki M, Strimmer K (2010). “Feature selection in omics prediction problems using cat scores and false nondiscovery rate control.” The Annals of Applied Statistics, 4(1). ISSN 1932-6157, doi:10.1214/09-AOAS277.
See also
as.data.table(mlr_learners) for a table of available Learners in the running session (depending on the loaded packages).
Chapter in the mlr3book: https://mlr3book.mlr-org.com/basics.html#learners
mlr3learners for a selection of recommended learners.
mlr3cluster for unsupervised clustering learners.
mlr3pipelines to combine learners with pre- and postprocessing steps.
mlr3tuning for tuning of hyperparameters, mlr3tuningspaces for established default tuning spaces.
Super classes
mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifSda
Methods
Inherited methods
mlr3::Learner$base_learner()
mlr3::Learner$configure()
mlr3::Learner$encapsulate()
mlr3::Learner$format()
mlr3::Learner$help()
mlr3::Learner$predict()
mlr3::Learner$predict_newdata()
mlr3::Learner$print()
mlr3::Learner$reset()
mlr3::Learner$selected_features()
mlr3::Learner$train()
mlr3::LearnerClassif$predict_newdata_fast()
Examples
# Define the Learner
learner = lrn("classif.sda")
print(learner)
#>
#> ── <LearnerClassifSda> (classif.sda): Shrinkage Discriminant Analysis ──────────
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3 and sda
#> • Predict Types: [response] and prob
#> • Feature Types: integer and numeric
#> • Encapsulation: none (fallback: -)
#> • Properties: multiclass and twoclass
#> • Other settings: use_weights = 'error'
# Define a Task
task = tsk("sonar")
# Create train and test set
ids = partition(task)
# Train the learner on the training ids
learner$train(task, row_ids = ids$train)
#> Number of variables: 60
#> Number of observations: 139
#> Number of classes: 2
#>
#> Estimating optimal shrinkage intensity lambda.freq (frequencies): 0.4772
#> Estimating variances (pooled across classes)
#> Estimating optimal shrinkage intensity lambda.var (variance vector): 0.0197
#>
#>
#> Computing inverse correlation matrix (pooled across classes)
#> Estimating optimal shrinkage intensity lambda (correlation matrix): 0.1083
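The messages above report the shrinkage intensities that sda::sda() estimated from the training data. A sketch of a variation that fixes these intensities via the learner's hyperparameters and silences the progress messages (the values below simply mirror the estimates above and are for illustration only):

```r
# Fix the shrinkage intensities instead of estimating them,
# and suppress sda's progress messages via the verbose flag
learner_fixed = lrn("classif.sda",
  lambda       = 0.1,   # shrinkage for the correlation matrix
  lambda.var   = 0.02,  # shrinkage for the variance vector
  lambda.freqs = 0.5,   # shrinkage for the class frequencies
  verbose      = FALSE
)
learner_fixed$train(task, row_ids = ids$train)
```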
print(learner$model)
#> $regularization
#> lambda lambda.var lambda.freqs
#> 0.10829444 0.01967552 0.47720776
#>
#> $freqs
#> M R
#> 0.5319693 0.4680307
#>
#> $alpha
#> M R
#> -3.306666 1.007997
#>
#> $beta
#> V1 V10 V11 V12 V13 V14 V15
#> M -1.992782 0.4993444 3.198831 2.912087 -1.292663 -0.3198561 -0.2328316
#> R 2.265020 -0.5675609 -3.635829 -3.309913 1.469256 0.3635523 0.2646392
#> V16 V17 V18 V19 V2 V20 V21
#> M -1.156350 -0.3828564 0.5594978 0.5578774 7.387121 -0.3991425 -0.02639912
#> R 1.314321 0.4351592 -0.6359319 -0.6340902 -8.396291 0.4536702 0.03000556
#> V22 V23 V24 V25 V26 V27 V28
#> M 0.8628583 1.064837 0.4894980 -0.5708333 -0.03663153 -0.4754514 0.03462023
#> R -0.9807351 -1.210306 -0.5563693 0.6488160 0.04163584 0.5404038 -0.03934977
#> V29 V3 V30 V31 V32 V33 V34
#> M 0.3516757 -3.900239 0.9207135 -1.347544 -0.2406987 -1.084775 -0.2983859
#> R -0.3997189 4.433059 -1.0464940 1.531635 0.2735811 1.232968 0.3391489
#> V35 V36 V37 V38 V39 V4 V40
#> M 0.5436684 -1.909503 -1.338888 0.7788254 1.348079 6.820133 -1.352004
#> R -0.6179401 2.170364 1.521796 -0.8852223 -1.532242 -7.751845 1.536704
#> V41 V42 V43 V44 V45 V46 V47
#> M 0.5958362 -0.02254651 1.606712 2.099012 1.524089 0.7419555 2.425383
#> R -0.6772346 0.02562664 -1.826208 -2.385762 -1.732298 -0.8433156 -2.756719
#> V48 V49 V5 V50 V51 V52 V53
#> M 4.975737 2.442685 -2.114336 -15.26700 -6.712628 -3.613139 0.8288796
#> R -5.655482 -2.776385 2.403180 17.35266 7.629653 4.106737 -0.9421145
#> V54 V55 V56 V57 V58 V59 V6
#> M -2.23216 -6.401164 1.383201 -3.468994 7.677517 1.339274 2.921926
#> R 2.53710 7.275640 -1.572163 3.942900 -8.726358 -1.522235 -3.321097
#> V60 V7 V8 V9
#> M 0.04927809 -2.962471 -1.568713 1.092050
#> R -0.05601007 3.367181 1.783018 -1.241237
#> attr(,"class")
#> [1] "shrinkage"
#>
#> attr(,"class")
#> [1] "sda"
# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)
#> Prediction uses 60 features.
# Score the predictions
predictions$score()
#> classif.ce
#> 0.2173913
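The learner also supports probability predictions (see the Predict Types line in the printout above), and scores can be computed with other measures than the default classification error; a sketch continuing the example:

```r
# Request class probabilities instead of hard labels
learner_prob = lrn("classif.sda", predict_type = "prob")
learner_prob$train(task, row_ids = ids$train)
predictions_prob = learner_prob$predict(task, row_ids = ids$test)

# Per-class probabilities for the classes M and R
head(predictions_prob$prob)

# Score with accuracy instead of classification error
predictions_prob$score(msr("classif.acc"))
```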