Shrinkage Discriminant Analysis for classification.
Calls sda::sda() from package sda.
Parameters
| Id | Type | Default | Levels | Range |
| --- | --- | --- | --- | --- |
| lambda | numeric | - | | \([0, 1]\) |
| lambda.var | numeric | - | | \([0, 1]\) |
| lambda.freqs | numeric | - | | \([0, 1]\) |
| diagonal | logical | FALSE | TRUE, FALSE | - |
| verbose | logical | FALSE | TRUE, FALSE | - |
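These hyperparameters are passed through to sda::sda(). The sketch below shows one way to set them, either at construction or afterwards via the parameter set; the values are illustrative, not recommendations.

library(mlr3)
library(mlr3extralearners)

# Fix the three shrinkage intensities instead of estimating them from the data,
# and select diagonal discriminant analysis via diagonal = TRUE (illustrative values)
learner = lrn("classif.sda",
  lambda = 0.1,
  lambda.var = 0.1,
  lambda.freqs = 0.5,
  diagonal = TRUE,
  verbose = FALSE
)

# Hyperparameters can also be changed after construction
learner$param_set$values$lambda = 0.5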
References
Ahdesmaeki, Miika, Strimmer, Korbinian (2010). “Feature selection in omics prediction problems using cat scores and false nondiscovery rate control.” The Annals of Applied Statistics, 4(1). ISSN 1932-6157, doi:10.1214/09-AOAS277, http://dx.doi.org/10.1214/09-AOAS277.
See also
as.data.table(mlr_learners) for a table of available Learners in the running session (depending on the loaded packages).
Chapter in the mlr3book: https://mlr3book.mlr-org.com/basics.html#learners
mlr3learners for a selection of recommended learners.
mlr3cluster for unsupervised clustering learners.
mlr3pipelines to combine learners with pre- and postprocessing steps.
mlr3tuning for tuning of hyperparameters (see the sketch below), mlr3tuningspaces for established default tuning spaces.
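As an illustration of the mlr3tuning entry above, the following sketch tunes the three shrinkage intensities over their \([0, 1]\) ranges; the tuner, resampling scheme and evaluation budget are arbitrary choices.

library(mlr3)
library(mlr3tuning)
library(mlr3extralearners)

# Mark the shrinkage intensities for tuning over [0, 1]
learner = lrn("classif.sda",
  lambda = to_tune(0, 1),
  lambda.var = to_tune(0, 1),
  lambda.freqs = to_tune(0, 1)
)

# Random search with a small budget, evaluated by 3-fold cross-validation
instance = tune(
  tuner = tnr("random_search"),
  task = tsk("sonar"),
  learner = learner,
  resampling = rsmp("cv", folds = 3),
  term_evals = 20
)
instance$result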
Super classes
mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifSda
Methods
Inherited methods
mlr3::Learner$base_learner(), mlr3::Learner$configure(), mlr3::Learner$encapsulate(), mlr3::Learner$format(), mlr3::Learner$help(), mlr3::Learner$predict(), mlr3::Learner$predict_newdata(), mlr3::Learner$print(), mlr3::Learner$reset(), mlr3::Learner$selected_features(), mlr3::Learner$train(), mlr3::LearnerClassif$predict_newdata_fast()
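For example, the sketch below trains the learner and then uses the inherited mlr3::Learner$predict_newdata() to predict on a plain table of features; the new data are the sonar task's own feature columns, used purely for illustration.

library(mlr3)
library(mlr3extralearners)

task = tsk("sonar")
learner = lrn("classif.sda")
learner$train(task)

# predict_newdata() accepts a data.frame/data.table without the target column
newdata = task$data(cols = task$feature_names)
learner$predict_newdata(newdata)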
Examples
# Load the packages that provide lrn(), tsk(), partition() and the learner itself
library(mlr3)
library(mlr3extralearners)

# Define the Learner
learner = lrn("classif.sda")
print(learner)
#>
#> ── <LearnerClassifSda> (classif.sda): Shrinkage Discriminant Analysis ──────────
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3 and sda
#> • Predict Types: [response] and prob
#> • Feature Types: integer and numeric
#> • Encapsulation: none (fallback: -)
#> • Properties: multiclass and twoclass
#> • Other settings: use_weights = 'error'
# Define a Task
task = tsk("sonar")
# Create train and test set
ids = partition(task)
# Train the learner on the training ids
learner$train(task, row_ids = ids$train)
#> Number of variables: 60
#> Number of observations: 139
#> Number of classes: 2
#>
#> Estimating optimal shrinkage intensity lambda.freq (frequencies): 0.615
#> Estimating variances (pooled across classes)
#> Estimating optimal shrinkage intensity lambda.var (variance vector): 0.0242
#>
#>
#> Computing inverse correlation matrix (pooled across classes)
#> Estimating optimal shrinkage intensity lambda (correlation matrix): 0.1138
print(learner$model)
#> $regularization
#> lambda lambda.var lambda.freqs
#> 0.11383173 0.02417569 0.61500805
#>
#> $freqs
#> M R
#> 0.5207729 0.4792271
#>
#> $alpha
#> M R
#> -3.695530 1.232667
#>
#> $beta
#> V1 V10 V11 V12 V13 V14 V15
#> M 0.3075237 1.365294 3.779204 1.966114 1.024047 -1.532948 -0.6554133
#> R -0.3341840 -1.483656 -4.106837 -2.136564 -1.112825 1.665844 0.7122334
#> V16 V17 V18 V19 V2 V20 V21
#> M -0.7789027 -0.1380161 0.4442139 -0.4096177 1.793072 0.9416295 0.3713993
#> R 0.8464285 0.1499813 -0.4827244 0.4451290 -1.948520 -1.0232627 -0.4035972
#> V22 V23 V24 V25 V26 V27 V28
#> M 0.1320067 1.058036 0.5754058 -1.686729 -0.1763055 0.2153419 0.6224469
#> R -0.1434508 -1.149761 -0.6252898 1.832958 0.1915901 -0.2340107 -0.6764090
#> V29 V3 V30 V31 V32 V33 V34
#> M -0.1832268 -10.73540 0.7605486 -2.022411 0.6021126 -0.3641762 -0.3047151
#> R 0.1991114 11.66609 -0.8264832 2.197741 -0.6543118 0.3957480 0.3311319
#> V35 V36 V37 V38 V39 V4 V40
#> M 0.6045593 -1.487545 -1.578607 0.9075349 0.5409071 10.41976 -2.079081
#> R -0.6569707 1.616506 1.715462 -0.9862123 -0.5878003 -11.32308 2.259324
#> V41 V42 V43 V44 V45 V46 V47
#> M 1.524142 -0.9899745 1.130474 1.097706 1.124803 2.274324 2.064211
#> R -1.656275 1.0757989 -1.228479 -1.192870 -1.222316 -2.471494 -2.243165
#> V48 V49 V5 V50 V51 V52 V53
#> M 0.6632397 9.825792 3.008018 -13.87841 -5.886898 -3.420403 3.570291
#> R -0.7207383 -10.677625 -3.268793 15.08158 6.397254 3.716930 -3.879812
#> V54 V55 V56 V57 V58 V59 V6
#> M 7.591207 -8.593300 -8.123260 -2.793647 2.329945 9.54171 0.2857729
#> R -8.249316 9.338283 8.827494 3.035838 -2.531936 -10.36891 -0.3105475
#> V60 V7 V8 V9
#> M -8.982838 0.1942529 0.02106854 0.4377749
#> R 9.761592 -0.2110934 -0.02289504 -0.4757272
#> attr(,"class")
#> [1] "shrinkage"
#>
#> attr(,"class")
#> [1] "sda"
# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)
#> Prediction uses 60 features.
# Score the predictions
predictions$score()
#> classif.ce
#> 0.2318841
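# (Illustrative addition; output omitted.) Other mlr3 measures can be scored
# the same way, e.g. classification accuracy
predictions$score(msr("classif.acc"))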