Classification Mixture Discriminant Analysis Learner
Source: R/learner_mda_classif_mda.R
mlr_learners_classif.mda.Rd
Mixture Discriminant Analysis.
Calls mda::mda() from package mda.
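For orientation, below is a minimal sketch of the underlying mda::mda() call that this learner wraps, run directly on the iris data shipped with R; the subclasses value is illustrative, not a default of this learner.

# Fit a mixture discriminant model directly with the mda package
fit = mda::mda(Species ~ ., data = iris, subclasses = 3)
# Predict class labels for a few observations
predict(fit, newdata = iris[1:5, ])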
Parameters
Id | Type | Default | Levels | Range |
criterion | character | misclassification | misclassification, deviance | - |
dimension | integer | - | - | \([1, \infty)\) |
eps | numeric | 2.220446e-16 | - | \([0, \infty)\) |
iter | integer | 5 | - | \([1, \infty)\) |
keep.fitted | logical | TRUE | TRUE, FALSE | - |
method | character | polyreg | polyreg, mars, bruto, gen.ridge | - |
prior | numeric | - | - | \([0, 1]\) |
start.method | character | kmeans | kmeans, lvq | - |
sub.df | integer | - | - | \([1, \infty)\) |
subclasses | integer | 2 | - | \((-\infty, \infty)\) |
tot.df | integer | - | - | \([1, \infty)\) |
trace | logical | FALSE | TRUE, FALSE | - |
tries | integer | 5 | - | \([1, \infty)\) |
weights | untyped | - | - | - |
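These parameters are passed through to mda::mda(). As a sketch (the chosen values are purely illustrative), they can be set at construction or changed afterwards via the learner's ParamSet:

# Set hyperparameters at construction time
learner = lrn("classif.mda", method = "mars", subclasses = 3)
# ... or modify them later via the ParamSet
learner$param_set$values$iter = 10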
See also
as.data.table(mlr_learners) for a table of available Learners in the running session (depending on the loaded packages).
Chapter in the mlr3book: https://mlr3book.mlr-org.com/basics.html#learners
mlr3learners for a selection of recommended learners.
mlr3cluster for unsupervised clustering learners.
mlr3pipelines to combine learners with pre- and postprocessing steps.
mlr3tuning for tuning of hyperparameters, mlr3tuningspaces for established default tuning spaces.
Super classes
mlr3::Learner
-> mlr3::LearnerClassif
-> LearnerClassifMda
Methods
Inherited methods
mlr3::Learner$base_learner()
mlr3::Learner$configure()
mlr3::Learner$encapsulate()
mlr3::Learner$format()
mlr3::Learner$help()
mlr3::Learner$predict()
mlr3::Learner$predict_newdata()
mlr3::Learner$print()
mlr3::Learner$reset()
mlr3::Learner$selected_features()
mlr3::Learner$train()
mlr3::LearnerClassif$predict_newdata_fast()
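For instance, predict_newdata() (inherited from mlr3::Learner) scores data that is not stored in a Task. A minimal sketch, training on the sonar task and predicting on a plain data.frame of features:

learner = lrn("classif.mda")
task = tsk("sonar")
learner$train(task)
# new observations as a data.frame without the target column
newdata = task$data(rows = 1:5, cols = task$feature_names)
learner$predict_newdata(newdata)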
Examples
# Define the Learner
learner = lrn("classif.mda")
print(learner)
#>
#> ── <LearnerClassifMda> (classif.mda): Mixture Discriminant Analysis ────────────
#> • Model: -
#> • Parameters: keep.fitted=FALSE
#> • Packages: mlr3 and mda
#> • Predict Types: [response] and prob
#> • Feature Types: integer, numeric, factor, and ordered
#> • Encapsulation: none (fallback: -)
#> • Properties: multiclass and twoclass
#> • Other settings: use_weights = 'error'
# Define a Task
task = tsk("sonar")
# Create train and test set
ids = partition(task)
# Train the learner on the training ids
learner$train(task, row_ids = ids$train)
print(learner$model)
#> Call:
#> mda::mda(formula = formula, data = data, keep.fitted = FALSE)
#>
#> Dimension: 5
#>
#> Percent Between-Group Variance Explained:
#> v1 v2 v3 v4 v5
#> 49.64 77.45 89.97 96.13 100.00
#>
#> Degrees of Freedom (per dimension): 61
#>
#> Training Misclassification Error: 0.01439 ( N = 139 )
#>
#> Deviance: 8.163
# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)
# Score the predictions
predictions$score()
#> classif.ce
#> 0.2463768
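As a possible next step (a sketch, not part of the original example), the single train/test split above can be replaced by cross-validation for a more stable error estimate:

# 5-fold cross-validation of the same learner on the same task
resampling = rsmp("cv", folds = 5)
rr = resample(task, learner, resampling)
# aggregate the per-fold classification errors
rr$aggregate(msr("classif.ce"))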