Shrinkage Discriminant Analysis for classification.
Calls sda::sda() from package sda.
Parameters
| Id | Type | Default | Levels | Range |
|----|------|---------|--------|-------|
| lambda | numeric | - | - | \([0, 1]\) |
| lambda.var | numeric | - | - | \([0, 1]\) |
| lambda.freqs | numeric | - | - | \([0, 1]\) |
| diagonal | logical | FALSE | TRUE, FALSE | - |
| verbose | logical | FALSE | TRUE, FALSE | - |
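For illustration, the shrinkage intensities can be fixed at construction instead of being estimated by sda::sda(). The values below are arbitrary and only sketch how hyperparameters are passed (assuming the package providing classif.sda, e.g. mlr3extralearners, is loaded):

# construct the learner with illustrative, hand-picked shrinkage intensities
learner = lrn("classif.sda", lambda = 0.5, lambda.var = 0.2, diagonal = FALSE)
# inspect the hyperparameters currently set on the learner
learner$param_set$values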
References
Ahdesmaeki, Miika, Strimmer, Korbinian (2010). “Feature selection in omics prediction problems using cat scores and false nondiscovery rate control.” The Annals of Applied Statistics, 4(1). ISSN 1932-6157, doi:10.1214/09-AOAS277.
See also
as.data.table(mlr_learners) for a table of available Learners in the running session (depending on the loaded packages).
Chapter in the mlr3book: https://mlr3book.mlr-org.com/basics.html#learners
mlr3learners for a selection of recommended learners.
mlr3cluster for unsupervised clustering learners.
mlr3pipelines to combine learners with pre- and postprocessing steps.
mlr3tuning for tuning of hyperparameters, mlr3tuningspaces for established default tuning spaces.
Super classes
mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifSda
Methods
Inherited methods
mlr3::Learner$base_learner(), mlr3::Learner$configure(), mlr3::Learner$encapsulate(), mlr3::Learner$format(), mlr3::Learner$help(), mlr3::Learner$predict(), mlr3::Learner$predict_newdata(), mlr3::Learner$print(), mlr3::Learner$reset(), mlr3::Learner$selected_features(), mlr3::Learner$train(), mlr3::LearnerClassif$predict_newdata_fast()
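A minimal sketch of two of these inherited methods in use, assuming learner, task, and ids exist as in the Examples below (objects are illustrative only):

# predict on a plain data.frame of new observations instead of a Task
newdata = task$data(rows = ids$test, cols = task$feature_names)
predictions = learner$predict_newdata(newdata)
# reset the learner, discarding the fitted model
learner$reset()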
Examples
# Define the Learner
learner = lrn("classif.sda")
print(learner)
#>
#> ── <LearnerClassifSda> (classif.sda): Shrinkage Discriminant Analysis ──────────
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3 and sda
#> • Predict Types: [response] and prob
#> • Feature Types: integer and numeric
#> • Encapsulation: none (fallback: -)
#> • Properties: multiclass and twoclass
#> • Other settings: use_weights = 'error'
# Define a Task
task = tsk("sonar")
# Create train and test set
ids = partition(task)
# Train the learner on the training ids
learner$train(task, row_ids = ids$train)
#> Number of variables: 60
#> Number of observations: 139
#> Number of classes: 2
#>
#> Estimating optimal shrinkage intensity lambda.freq (frequencies): 1
#> Estimating variances (pooled across classes)
#> Estimating optimal shrinkage intensity lambda.var (variance vector): 0.0216
#>
#>
#> Computing inverse correlation matrix (pooled across classes)
#> Estimating optimal shrinkage intensity lambda (correlation matrix): 0.1134
print(learner$model)
#> $regularization
#> lambda lambda.var lambda.freqs
#> 0.11341116 0.02160773 1.00000000
#>
#> $freqs
#> M R
#> 0.5 0.5
#>
#> $alpha
#> M R
#> -4.632688 2.132778
#>
#> $beta
#> V1 V10 V11 V12 V13 V14 V15
#> M 3.469485 -1.063034 2.80197 2.133526 0.5527459 -0.3194863 -0.6405018
#> R -3.469485 1.063034 -2.80197 -2.133526 -0.5527459 0.3194863 0.6405018
#> V16 V17 V18 V19 V2 V20 V21
#> M 0.1121537 -0.3709685 0.226505 0.7586813 5.933914 0.6161889 0.3845677
#> R -0.1121537 0.3709685 -0.226505 -0.7586813 -5.933914 -0.6161889 -0.3845677
#> V22 V23 V24 V25 V26 V27 V28
#> M -0.05830444 0.5650316 0.2709981 -0.6245702 0.01207228 0.5366892 0.397712
#> R 0.05830444 -0.5650316 -0.2709981 0.6245702 -0.01207228 -0.5366892 -0.397712
#> V29 V3 V30 V31 V32 V33 V34
#> M 0.09287878 -7.90592 1.801672 -2.681472 0.1202294 0.9904799 -0.33611
#> R -0.09287878 7.90592 -1.801672 2.681472 -0.1202294 -0.9904799 0.33611
#> V35 V36 V37 V38 V39 V4 V40
#> M 0.2369105 -1.132062 -1.570107 0.6413819 1.519329 9.45887 -1.495248
#> R -0.2369105 1.132062 1.570107 -0.6413819 -1.519329 -9.45887 1.495248
#> V41 V42 V43 V44 V45 V46 V47
#> M -0.2435721 -0.9417966 1.253769 -0.2848458 1.221545 1.607202 0.5886535
#> R 0.2435721 0.9417966 -1.253769 0.2848458 -1.221545 -1.607202 -0.5886535
#> V48 V49 V5 V50 V51 V52 V53
#> M 5.96135 8.966108 -1.392169 -10.25802 -2.547421 -2.340742 1.720018
#> R -5.96135 -8.966108 1.392169 10.25802 2.547421 2.340742 -1.720018
#> V54 V55 V56 V57 V58 V59 V6
#> M 2.884557 -13.09615 -4.175736 -3.16741 0.4299775 11.76691 -0.2787231
#> R -2.884557 13.09615 4.175736 3.16741 -0.4299775 -11.76691 0.2787231
#> V60 V7 V8 V9
#> M 1.085997 -0.9606331 -0.190927 0.5412189
#> R -1.085997 0.9606331 0.190927 -0.5412189
#> attr(,"class")
#> [1] "shrinkage"
#>
#> attr(,"class")
#> [1] "sda"
# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)
#> Prediction uses 60 features.
# Score the predictions
predictions$score()
#> classif.ce
#> 0.2028986
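Since the learner also supports probability predictions, a variation of the example above (a sketch; output not shown) requests them and scores with additional measures. "classif.auc" assumes the two-class sonar task:

# switch to probability predictions and retrain
learner$predict_type = "prob"
learner$train(task, row_ids = ids$train)
predictions = learner$predict(task, row_ids = ids$test)
# score with several measures at once
predictions$score(msrs(c("classif.ce", "classif.acc", "classif.auc")))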