Shrinkage Discriminant Analysis for classification.
Calls sda::sda() from package sda.
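For orientation, the wrapped function can also be called directly. A minimal sketch on the built-in iris data (the feature matrix and label factor below are constructed purely for illustration):

# Fit a shrinkage discriminant model directly via the wrapped function
X = as.matrix(iris[, 1:4])  # numeric feature matrix
y = iris$Species            # factor of class labels
fit = sda::sda(Xtrain = X, L = y, diagonal = FALSE, verbose = FALSE)
# Predict class labels and posterior probabilities for (here) the training data
pred = predict(fit, Xtest = X)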
Parameters
| Id | Type | Default | Levels | Range |
|----|------|---------|--------|-------|
| lambda | numeric | - | - | \([0, 1]\) |
| lambda.var | numeric | - | - | \([0, 1]\) |
| lambda.freqs | numeric | - | - | \([0, 1]\) |
| diagonal | logical | FALSE | TRUE, FALSE | - |
| verbose | logical | FALSE | TRUE, FALSE | - |
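These hyperparameters can be set at construction or changed later through the learner's parameter set; the values below are illustrative only, not tuned recommendations:

# Construct the learner with explicit shrinkage intensities (illustrative values)
learner = lrn("classif.sda", lambda = 0.1, lambda.var = 0.05, diagonal = FALSE)
# Alternatively, set or change a value on an existing learner
learner$param_set$values$lambda.freqs = 0.5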
References
Ahdesmäki M, Strimmer K (2010). “Feature selection in omics prediction problems using cat scores and false nondiscovery rate control.” The Annals of Applied Statistics, 4(1). ISSN 1932-6157, doi:10.1214/09-AOAS277.
See also
as.data.table(mlr_learners) for a table of available Learners in the running session (depending on the loaded packages).
Chapter in the mlr3book: https://mlr3book.mlr-org.com/basics.html#learners
mlr3learners for a selection of recommended learners.
mlr3cluster for unsupervised clustering learners.
mlr3pipelines to combine learners with pre- and postprocessing steps.
mlr3tuning for tuning of hyperparameters, mlr3tuningspaces for established default tuning spaces.
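For example, the first entry above lists all learners registered in the running session; a minimal sketch (the rows returned depend on which extension packages are loaded):

library(mlr3)
# All learners known to the current session, as a data.table
learners = as.data.table(mlr_learners)
# Restrict to classification learners
learners[task_type == "classif"]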
Super classes
mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifSda
Methods
Inherited methods
mlr3::Learner$base_learner(), mlr3::Learner$configure(), mlr3::Learner$encapsulate(), mlr3::Learner$format(), mlr3::Learner$help(), mlr3::Learner$predict(), mlr3::Learner$predict_newdata(), mlr3::Learner$print(), mlr3::Learner$reset(), mlr3::Learner$selected_features(), mlr3::Learner$train(), mlr3::LearnerClassif$predict_newdata_fast()
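Most of these methods appear in the Examples section below; in addition, $predict_newdata() accepts a plain data.frame or data.table once the learner has been trained. A minimal sketch, assuming the trained learner from the Examples:

# Predict on new observations supplied as a data.table
# (assumes `learner` has already been trained on the "sonar" task)
newdata = as.data.table(tsk("sonar"))[1:5, ]
learner$predict_newdata(newdata)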
Examples
# Define the Learner
learner = lrn("classif.sda")
print(learner)
#>
#> ── <LearnerClassifSda> (classif.sda): Shrinkage Discriminant Analysis ──────────
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3 and sda
#> • Predict Types: [response] and prob
#> • Feature Types: integer and numeric
#> • Encapsulation: none (fallback: -)
#> • Properties: multiclass and twoclass
#> • Other settings: use_weights = 'error'
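# The learner also supports probability predictions; this could be enabled
# before training via the predict_type field (left commented out here so the
# output shown below corresponds to the default "response" predictions):
# learner$predict_type = "prob"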
# Define a Task
task = tsk("sonar")
# Create train and test set
ids = partition(task)
# Train the learner on the training ids
learner$train(task, row_ids = ids$train)
#> Number of variables: 60
#> Number of observations: 139
#> Number of classes: 2
#>
#> Estimating optimal shrinkage intensity lambda.freq (frequencies): 0.3806
#> Estimating variances (pooled across classes)
#> Estimating optimal shrinkage intensity lambda.var (variance vector): 0.0251
#>
#>
#> Computing inverse correlation matrix (pooled across classes)
#> Estimating optimal shrinkage intensity lambda (correlation matrix): 0.1113
print(learner$model)
#> $regularization
#> lambda lambda.var lambda.freqs
#> 0.1112586 0.0251449 0.3805853
#>
#> $freqs
#> M R
#> 0.5423341 0.4576659
#>
#> $alpha
#> M R
#> -4.996040 2.711277
#>
#> $beta
#> V1 V10 V11 V12 V13 V14 V15
#> M 5.591991 -1.363246 4.279834 2.496020 0.9211589 -0.4218282 -0.5748698
#> R -6.626510 1.615447 -5.071604 -2.957784 -1.0915733 0.4998664 0.6812207
#> V16 V17 V18 V19 V2 V20 V21
#> M -0.9638548 -0.1311606 0.7095271 -0.4580365 0.7037277 0.7383013 0.183319
#> R 1.1421679 0.1554253 -0.8407896 0.5427733 -0.8339174 -0.8748871 -0.217233
#> V22 V23 V24 V25 V26 V27 V28
#> M -0.3284902 1.598934 0.6036768 -1.202600 -0.4511181 0.7169939 0.08177867
#> R 0.3892609 -1.894737 -0.7153571 1.425081 0.5345750 -0.8496378 -0.09690773
#> V29 V3 V30 V31 V32 V33 V34
#> M 1.199137 -4.951643 2.246521 -2.638062 -0.3378303 0.1005870 -0.5064155
#> R -1.420977 5.867697 -2.662127 3.126104 0.4003289 -0.1191956 0.6001024
#> V35 V36 V37 V38 V39 V4 V40
#> M -0.0003632293 -1.378164 -1.557674 0.8101511 1.556051 9.528685 -1.947735
#> R 0.0004304267 1.633124 1.845844 -0.9600291 -1.843921 -11.291492 2.308066
#> V41 V42 V43 V44 V45 V46 V47
#> M 0.7796085 -1.495069 0.5001277 0.3409253 1.151597 2.758092 0.4062079
#> R -0.9238360 1.771657 -0.5926513 -0.4039965 -1.364643 -3.268339 -0.4813564
#> V48 V49 V5 V50 V51 V52 V53
#> M 1.940764 8.934196 2.892970 -10.89628 -0.8299130 -0.9095215 -1.179387
#> R -2.299805 -10.587022 -3.428169 12.91209 0.9834469 1.0777830 1.397573
#> V54 V55 V56 V57 V58 V59 V6
#> M 2.398150 -8.907878 -2.498063 -3.006477 3.865046 6.639299 1.946850
#> R -2.841808 10.555836 2.960204 3.562675 -4.580080 -7.867569 -2.307017
#> V60 V7 V8 V9
#> M -0.8505313 -0.5993298 -2.475867 2.682181
#> R 1.0078795 0.7102058 2.933902 -3.178385
#> attr(,"class")
#> [1] "shrinkage"
#>
#> attr(,"class")
#> [1] "sda"
# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)
#> Prediction uses 60 features.
# Score the predictions
predictions$score()
#> classif.ce
#> 0.3478261
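# Further measures can be computed from the same prediction object, for
# example the accuracy or the confusion matrix (sketch only, output not shown):
# predictions$score(msr("classif.acc"))
# predictions$confusion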