
Mixture Discriminant Analysis. Calls mda::mda() from package mda.

Initial parameter values

  • keep.fitted: Set to FALSE by default for speed.

Dictionary

This Learner can be instantiated via lrn():

lrn("classif.mda")
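Hyperparameters can also be passed directly to lrn() at construction time. A minimal sketch (the chosen values are purely illustrative, not recommendations):

```r
library(mlr3)
library(mlr3extralearners)

# Construct the learner with hyperparameters set up front
# (method and subclasses values chosen only for illustration)
learner = lrn("classif.mda", method = "mars", subclasses = 3)

# Inspect the stored hyperparameter values
learner$param_set$values
```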

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob”

  • Feature Types: “integer”, “numeric”, “factor”, “ordered”

  • Required Packages: mlr3, mda

Parameters

Id            Type       Default            Levels                           Range
criterion     character  misclassification  misclassification, deviance      -
dimension     integer    -                  -                                \([1, \infty)\)
eps           numeric    2.220446e-16       -                                \([0, \infty)\)
iter          integer    5                  -                                \([1, \infty)\)
keep.fitted   logical    TRUE               TRUE, FALSE                      -
method        character  polyreg            polyreg, mars, bruto, gen.ridge  -
prior         numeric    -                  -                                \([0, 1]\)
start.method  character  kmeans             kmeans, lvq                      -
sub.df        integer    -                  -                                \([1, \infty)\)
subclasses    integer    2                  -                                \((-\infty, \infty)\)
tot.df        integer    -                  -                                \([1, \infty)\)
trace         logical    FALSE              TRUE, FALSE                      -
tries         integer    5                  -                                \([1, \infty)\)
weights       untyped    -                  -                                -
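Parameters from this table can also be changed on an existing learner through its param_set. A brief sketch (values chosen only for illustration):

```r
library(mlr3)
library(mlr3extralearners)

learner = lrn("classif.mda")

# Update hyperparameters after construction
# (subclasses and iter values chosen only for illustration)
learner$param_set$set_values(subclasses = 4, iter = 10)

# Verify the updated values
learner$param_set$values
```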

Author

annanzrv

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifMda

Methods

Inherited methods


Method new()

Creates a new instance of this R6 class.

Usage

LearnerClassifMda$new()
Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifMda$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

# Define the Learner
learner = lrn("classif.mda")
print(learner)
#> 
#> ── <LearnerClassifMda> (classif.mda): Mixture Discriminant Analysis ────────────
#> • Model: -
#> • Parameters: keep.fitted=FALSE
#> • Packages: mlr3 and mda
#> • Predict Types: [response] and prob
#> • Feature Types: integer, numeric, factor, and ordered
#> • Encapsulation: none (fallback: -)
#> • Properties: multiclass and twoclass
#> • Other settings: use_weights = 'error'

# Define a Task
task = tsk("sonar")

# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> Call:
#> mda::mda(formula = formula, data = data, keep.fitted = FALSE)
#> 
#> Dimension: 5 
#> 
#> Percent Between-Group Variance Explained:
#>     v1     v2     v3     v4     v5 
#>  49.64  77.45  89.97  96.13 100.00 
#> 
#> Degrees of Freedom (per dimension): 61 
#> 
#> Training Misclassification Error: 0.01439 ( N = 139 )
#> 
#> Deviance: 8.163 


# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> classif.ce 
#>  0.2463768
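Since the learner lists "prob" among its predict types, class probabilities can be requested instead of hard labels. A minimal sketch following the example above (the accuracy measure is used here only for illustration):

```r
library(mlr3)
library(mlr3extralearners)

# Same task and split as in the example above
task = tsk("sonar")
ids = partition(task)

# Request class probabilities instead of hard labels
learner = lrn("classif.mda", predict_type = "prob")
learner$train(task, row_ids = ids$train)
predictions = learner$predict(task, row_ids = ids$test)

# Probabilities are available alongside the response
head(predictions$prob)

# Score with a measure of choice, e.g. accuracy
predictions$score(msr("classif.acc"))
```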