Density Nonparametric Learner
mlr_learners_dens.nonpar.Rd
Nonparametric density estimation.
Calls sm::sm.density() from package sm.
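The learner is a thin wrapper around sm::sm.density(); the following is a minimal sketch (assuming mlr3, mlr3proba, and mlr3extralearners are installed) showing the wrapped call alongside a roughly equivalent direct call to the underlying function.

library(mlr3)
library(mlr3proba)
library(mlr3extralearners)

task = tsk("faithful")
learner = lrn("dens.nonpar")
learner$train(task)
learner$model                     # fitted nonparametric density

# roughly the direct equivalent using the underlying sm function
x = task$data()[[1L]]             # the single numeric feature of the density task
fit = sm::sm.density(x, display = "none")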
Meta Information
Task type: “dens”
Predict Types: “pdf”
Feature Types: “integer”, “numeric”
Required Packages: mlr3, mlr3proba, mlr3extralearners, sm
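The same metadata can be read from a constructed learner object; a short sketch using standard Learner fields.

learner = mlr3::lrn("dens.nonpar")
learner$predict_types   # "pdf"
learner$feature_types   # "integer" "numeric"
learner$packages        # required packages listed above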
Parameters
Id | Type | Default | Levels | Range
h | numeric | - | - | \((-\infty, \infty)\)
group | untyped | - | - | -
delta | numeric | - | - | \((-\infty, \infty)\)
h.weights | numeric | 1 | - | \((-\infty, \infty)\)
hmult | untyped | 1 | - | -
method | character | normal | normal, cv, sj, df, aicc | -
positive | logical | FALSE | TRUE, FALSE | -
verbose | untyped | 1 | - | -
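Hyperparameters from the table above can be set at construction or later through the learner's parameter set; a brief sketch (the specific parameter values are illustrative only).

learner = mlr3::lrn("dens.nonpar", method = "cv", positive = FALSE)
learner$param_set$values$hmult = 1.2   # adjust the bandwidth multiplier afterwards
learner$param_set$values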
References
Bowman, A.W., Azzalini, A. (1997). Applied Smoothing Techniques for Data Analysis: The Kernel Approach with S-Plus Illustrations. Oxford Statistical Science Series. Oxford University Press. ISBN 9780191545696. https://books.google.de/books?id=7WBMrZ9umRYC.
See also
as.data.table(mlr_learners) for a table of available Learners in the running session (depending on the loaded packages); a short listing sketch follows this list.
Chapter in the mlr3book: https://mlr3book.mlr-org.com/basics.html#learners
mlr3learners for a selection of recommended learners.
mlr3cluster for unsupervised clustering learners.
mlr3pipelines to combine learners with pre- and postprocessing steps.
mlr3tuning for tuning of hyperparameters, mlr3tuningspaces for established default tuning spaces.
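As referenced in the list above, as.data.table(mlr_learners) lists all registered learners; the sketch below filters it to density learners (the task_type and key columns are assumed to be present in the returned table).

library(mlr3)
library(mlr3extralearners)
tab = as.data.table(mlr_learners)
tab[task_type == "dens", .(key, packages)]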
Super classes
mlr3::Learner
-> mlr3proba::LearnerDens
-> LearnerDensNonparametric
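The inheritance chain above can be checked directly on a constructed learner (a small sketch).

learner = mlr3::lrn("dens.nonpar")
class(learner)   # expected to include LearnerDensNonparametric, LearnerDens, Learner, R6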
Examples
# Define the Learner
learner = mlr3::lrn("dens.nonpar")
print(learner)
#> <LearnerDensNonparametric:dens.nonpar>: Nonparametric Density Estimation
#> * Model: -
#> * Parameters: list()
#> * Packages: mlr3, mlr3proba, mlr3extralearners, sm
#> * Predict Types: [pdf]
#> * Feature Types: integer, numeric
#> * Properties: weights
# Define a Task
task = mlr3::tsk("faithful")
# Create train and test set
ids = mlr3::partition(task)
# Train the learner on the training ids
learner$train(task, row_ids = ids$train)
print(learner$model)
#> NonparDens()
# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)
# Score the predictions
predictions$score()
#> dens.logloss
#> 1.193275
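Beyond the single train/predict split shown above, generalization performance can also be estimated by resampling; a minimal sketch using 3-fold cross-validation and the same log-loss measure (the fold count is chosen for illustration).

resampling = mlr3::rsmp("cv", folds = 3)
rr = mlr3::resample(task, learner, resampling)
rr$aggregate(mlr3::msr("dens.logloss"))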