Shrinkage Discriminant Analysis for classification.
Calls sda::sda() from package sda.
Parameters
| Id | Type | Default | Levels | Range |
| --- | --- | --- | --- | --- |
| lambda | numeric | - | - | \([0, 1]\) |
| lambda.var | numeric | - | - | \([0, 1]\) |
| lambda.freqs | numeric | - | - | \([0, 1]\) |
| diagonal | logical | FALSE | TRUE, FALSE | - |
| verbose | logical | FALSE | TRUE, FALSE | - |
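If the shrinkage intensities are left unset, sda::sda() estimates them from the data (as the training log in the Examples below shows). A minimal sketch of fixing hyperparameters at construction instead; the values chosen here are illustrative only:

# fix the shrinkage intensities instead of estimating them (illustrative values)
learner = lrn("classif.sda", lambda = 0.5, lambda.var = 0.5, diagonal = FALSE)
# parameters can also be changed on an existing learner
learner$param_set$values$verbose = FALSE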
References
Ahdesmaeki, Miika, Strimmer, Korbinian (2010). “Feature selection in omics prediction problems using cat scores and false nondiscovery rate control.” The Annals of Applied Statistics, 4(1). ISSN 1932-6157, doi:10.1214/09-AOAS277, http://dx.doi.org/10.1214/09-AOAS277.
See also
as.data.table(mlr_learners) for a table of available Learners in the running session (depending on the loaded packages).
Chapter in the mlr3book: https://mlr3book.mlr-org.com/basics.html#learners
mlr3learners for a selection of recommended learners.
mlr3cluster for unsupervised clustering learners.
mlr3pipelines to combine learners with pre- and postprocessing steps.
mlr3tuning for tuning of hyperparameters (see the sketch after this list), mlr3tuningspaces for established default tuning spaces.
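A minimal tuning sketch, assuming mlr3tuning (and mlr3extralearners, which provides this learner) are installed; the grid resolution and resampling are arbitrary choices:

library(mlr3tuning)
# tune the shrinkage intensity lambda over its range [0, 1]
at = auto_tuner(
  tuner = tnr("grid_search", resolution = 5),
  learner = lrn("classif.sda", lambda = to_tune(0, 1)),
  resampling = rsmp("cv", folds = 3),
  measure = msr("classif.ce")
)
at$train(tsk("sonar"))
at$tuning_result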
Super classes
mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifSda
Methods
Inherited methods
mlr3::Learner$base_learner(), mlr3::Learner$configure(), mlr3::Learner$encapsulate(), mlr3::Learner$format(), mlr3::Learner$help(), mlr3::Learner$predict(), mlr3::Learner$predict_newdata(), mlr3::Learner$print(), mlr3::Learner$reset(), mlr3::Learner$selected_features(), mlr3::Learner$train(), mlr3::LearnerClassif$predict_newdata_fast()
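As a sketch of the inherited interface, a trained learner can predict on a plain data.frame via $predict_newdata(); the "new" rows here are simply taken from the task for brevity:

task = tsk("sonar")
learner = lrn("classif.sda")
learner$train(task)
# new observations as a data.frame without the target column
newdata = task$data(rows = 1:5, cols = task$feature_names)
learner$predict_newdata(newdata)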
Examples
# Define the Learner
learner = lrn("classif.sda")
print(learner)
#>
#> ── <LearnerClassifSda> (classif.sda): Shrinkage Discriminant Analysis ──────────
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3 and sda
#> • Predict Types: [response] and prob
#> • Feature Types: integer and numeric
#> • Encapsulation: none (fallback: -)
#> • Properties: multiclass and twoclass
#> • Other settings: use_weights = 'error'
# Define a Task
task = tsk("sonar")
# Create train and test set
ids = partition(task)
# Train the learner on the training ids
learner$train(task, row_ids = ids$train)
#> Number of variables: 60
#> Number of observations: 139
#> Number of classes: 2
#>
#> Estimating optimal shrinkage intensity lambda.freq (frequencies): 1
#> Estimating variances (pooled across classes)
#> Estimating optimal shrinkage intensity lambda.var (variance vector): 0.0227
#>
#>
#> Computing inverse correlation matrix (pooled across classes)
#> Estimating optimal shrinkage intensity lambda (correlation matrix): 0.1058
print(learner$model)
#> $regularization
#> lambda lambda.var lambda.freqs
#> 0.10579158 0.02268487 1.00000000
#>
#> $freqs
#> M R
#> 0.5 0.5
#>
#> $alpha
#> M R
#> -4.442131 1.864821
#>
#> $beta
#> V1 V10 V11 V12 V13 V14 V15
#> M 5.981773 -0.9243798 2.72201 2.696162 1.137289 -0.003835789 0.2036479
#> R -5.981773 0.9243798 -2.72201 -2.696162 -1.137289 0.003835789 -0.2036479
#> V16 V17 V18 V19 V2 V20 V21
#> M -1.130739 -0.3830154 0.6142087 0.9283625 6.330304 -0.02282051 -0.1591195
#> R 1.130739 0.3830154 -0.6142087 -0.9283625 -6.330304 0.02282051 0.1591195
#> V22 V23 V24 V25 V26 V27 V28
#> M 0.2734222 1.241303 0.5876451 -0.7561536 -0.2575321 0.2063491 0.08686382
#> R -0.2734222 -1.241303 -0.5876451 0.7561536 0.2575321 -0.2063491 -0.08686382
#> V29 V3 V30 V31 V32 V33 V34
#> M 0.05979306 -13.35534 1.82236 -2.340755 -0.02154636 0.7458659 0.01142841
#> R -0.05979306 13.35534 -1.82236 2.340755 0.02154636 -0.7458659 -0.01142841
#> V35 V36 V37 V38 V39 V4 V40
#> M 0.2032105 -1.634279 -1.431982 1.531571 1.528567 13.4506 -2.102436
#> R -0.2032105 1.634279 1.431982 -1.531571 -1.528567 -13.4506 2.102436
#> V41 V42 V43 V44 V45 V46 V47
#> M -0.3780219 0.5049759 0.7703334 -1.338935 1.222275 2.903491 1.985189
#> R 0.3780219 -0.5049759 -0.7703334 1.338935 -1.222275 -2.903491 -1.985189
#> V48 V49 V5 V50 V51 V52 V53
#> M 4.296285 8.07823 -1.761855 -18.06372 -3.067673 0.426596 -0.5162367
#> R -4.296285 -8.07823 1.761855 18.06372 3.067673 -0.426596 0.5162367
#> V54 V55 V56 V57 V58 V59 V6 V60
#> M 9.356911 -11.9909 -2.58768 -5.386232 -1.047491 5.042317 1.483546 1.879316
#> R -9.356911 11.9909 2.58768 5.386232 1.047491 -5.042317 -1.483546 -1.879316
#> V7 V8 V9
#> M -3.279059 -3.937788 2.803834
#> R 3.279059 3.937788 -2.803834
#> attr(,"class")
#> [1] "shrinkage"
#>
#> attr(,"class")
#> [1] "sda"
# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)
#> Prediction uses 60 features.
# Score the predictions
predictions$score()
#> classif.ce
#> 0.2173913
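The same workflow extends to probability predictions and other measures; a short sketch reusing task and ids from above (classif.acc is just one alternative measure):

# request class probabilities instead of hard labels
learner = lrn("classif.sda", predict_type = "prob")
learner$train(task, row_ids = ids$train)
predictions = learner$predict(task, row_ids = ids$test)
# inspect probabilities, the confusion matrix, and an alternative measure
head(predictions$prob)
predictions$confusion
predictions$score(msr("classif.acc"))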