
Naive Bayes Classifier Using Estimator Classes. Calls RWeka::make_Weka_classifier() from RWeka.

Custom mlr3 parameters

  • output_debug_info:

    • original id: output-debug-info

  • do_not_check_capabilities:

    • original id: do-not-check-capabilities

  • num_decimal_places:

    • original id: num-decimal-places

  • batch_size:

    • original id: batch-size

  • Reason for change: the ids of these control arguments were changed because their original Weka ids follow an irregular pattern (they contain dashes); the renamed ids are used when setting these arguments in mlr3, as shown in the sketch below.
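
A minimal sketch (assuming mlr3 and mlr3extralearners are installed so that the learner is registered): the renamed ids, not the original dashed Weka ids, are used on the mlr3 side.

# use the mlr3 ids (underscores), not the original Weka ids (dashes)
learner = mlr3::lrn("classif.naive_bayes_weka",
  output_debug_info = FALSE,
  num_decimal_places = 4
)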

Dictionary

This Learner can be instantiated via lrn():

lrn("classif.naive_bayes_weka")

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob”

  • Feature Types: “logical”, “integer”, “numeric”, “factor”, “ordered”

  • Required Packages: mlr3, RWeka
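
The same meta information can be queried from the learner object through standard mlr3 Learner fields (a minimal sketch):

learner = mlr3::lrn("classif.naive_bayes_weka")
learner$task_type      # "classif"
learner$predict_types  # "response", "prob"
learner$feature_types  # "logical", "integer", "numeric", "factor", "ordered"
learner$packages       # "mlr3", "RWeka"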

Parameters

Id                         Type     Default  Levels       Range
-------------------------  -------  -------  -----------  --------
subset                     untyped  -                     -
na.action                  untyped  -                     -
K                          logical  FALSE    TRUE, FALSE  -
D                          logical  FALSE    TRUE, FALSE  -
O                          logical  FALSE    TRUE, FALSE  -
output_debug_info          logical  FALSE    TRUE, FALSE  -
do_not_check_capabilities  logical  FALSE    TRUE, FALSE  -
num_decimal_places         integer  2                     [1, ∞)
batch_size                 integer  100                   [1, ∞)
options                    untyped  NULL                  -
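
A minimal sketch of configuring the Weka-specific options (assuming the usual Weka NaiveBayes meanings: K enables a kernel density estimator for numeric attributes, D enables supervised discretization; $set_values() requires a recent paradox version):

learner = mlr3::lrn("classif.naive_bayes_weka")

# set hyperparameters after construction ...
learner$param_set$set_values(K = TRUE)

# ... or directly at construction
learner = mlr3::lrn("classif.naive_bayes_weka", D = TRUE)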

References

John GH, Langley P (1995). “Estimating Continuous Distributions in Bayesian Classifiers.” In Eleventh Conference on Uncertainty in Artificial Intelligence, 338-345.

See also

Author

damirpolat

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifNaiveBayesWeka

Methods

Inherited methods


Method new()

Creates a new instance of this R6 class.

Usage

LearnerClassifNaiveBayesWeka$new()

Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifNaiveBayesWeka$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

# Define the Learner
learner = mlr3::lrn("classif.naive_bayes_weka")
print(learner)
#> <LearnerClassifNaiveBayesWeka:classif.naive_bayes_weka>: Naive Bayes
#> * Model: -
#> * Parameters: list()
#> * Packages: mlr3, RWeka
#> * Predict Types:  [response], prob
#> * Feature Types: logical, integer, numeric, factor, ordered
#> * Properties: missings, multiclass, twoclass

# Define a Task
task = mlr3::tsk("sonar")

# Create train and test set
ids = mlr3::partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> Naive Bayes Classifier
#> 
#>                  Class
#> Attribute            M       R
#>                 (0.58)  (0.42)
#> ===============================
#> V1
#>   mean           0.0369  0.0229
#>   std. dev.      0.0284  0.0132
#>   weight sum         81      58
#>   precision      0.0011  0.0011
#> 
#> V10
#>   mean           0.2589  0.1595
#>   std. dev.      0.1405  0.1194
#>   weight sum         81      58
#>   precision      0.0051  0.0051
#> 
#> V11
#>   mean           0.2923   0.171
#>   std. dev.      0.1289  0.1118
#>   weight sum         81      58
#>   precision      0.0052  0.0052
#> 
#> V12
#>   mean           0.2957   0.183
#>   std. dev.       0.123  0.1156
#>   weight sum         81      58
#>   precision      0.0039  0.0039
#> 
#> V13
#>   mean            0.316  0.2173
#>   std. dev.       0.124  0.1272
#>   weight sum         81      58
#>   precision      0.0052  0.0052
#> 
#> V14
#>   mean           0.3244  0.2686
#>   std. dev.      0.1616  0.1715
#>   weight sum         81      58
#>   precision      0.0071  0.0071
#> 
#> V15
#>   mean           0.3373  0.3006
#>   std. dev.      0.1984  0.2259
#>   weight sum         81      58
#>   precision      0.0072  0.0072
#> 
#> V16
#>   mean           0.3883  0.3647
#>   std. dev.      0.2218    0.25
#>   weight sum         81      58
#>   precision       0.007   0.007
#> 
#> V17
#>   mean           0.4215  0.4138
#>   std. dev.      0.2452  0.2806
#>   weight sum         81      58
#>   precision      0.0069  0.0069
#> 
#> V18
#>   mean           0.4575  0.4447
#>   std. dev.      0.2572  0.2671
#>   weight sum         81      58
#>   precision      0.0071  0.0071
#> 
#> V19
#>   mean           0.5395  0.4551
#>   std. dev.      0.2545   0.247
#>   weight sum         81      58
#>   precision      0.0066  0.0066
#> 
#> V2
#>   mean           0.0466  0.0336
#>   std. dev.      0.0385  0.0271
#>   weight sum         81      58
#>   precision      0.0018  0.0018
#> 
#> V20
#>   mean           0.6044  0.4905
#>   std. dev.      0.2588   0.249
#>   weight sum         81      58
#>   precision      0.0067  0.0067
#> 
#> V21
#>   mean           0.6496  0.5346
#>   std. dev.      0.2594   0.247
#>   weight sum         81      58
#>   precision      0.0072  0.0072
#> 
#> V22
#>   mean           0.6517  0.5471
#>   std. dev.      0.2406  0.2617
#>   weight sum         81      58
#>   precision      0.0072  0.0072
#> 
#> V23
#>   mean           0.6637  0.5937
#>   std. dev.      0.2557  0.2525
#>   weight sum         81      58
#>   precision      0.0072  0.0072
#> 
#> V24
#>   mean           0.6806  0.6491
#>   std. dev.      0.2442  0.2356
#>   weight sum         81      58
#>   precision      0.0073  0.0073
#> 
#> V25
#>   mean           0.6677  0.6417
#>   std. dev.      0.2385  0.2653
#>   weight sum         81      58
#>   precision      0.0072  0.0072
#> 
#> V26
#>   mean           0.6836  0.6584
#>   std. dev.      0.2413  0.2388
#>   weight sum         81      58
#>   precision      0.0069  0.0069
#> 
#> V27
#>   mean           0.6936  0.6616
#>   std. dev.       0.267  0.2138
#>   weight sum         81      58
#>   precision      0.0074  0.0074
#> 
#> V28
#>   mean           0.7064  0.6529
#>   std. dev.      0.2662  0.2078
#>   weight sum         81      58
#>   precision      0.0075  0.0075
#> 
#> V29
#>   mean           0.6573  0.6155
#>   std. dev.      0.2575  0.2514
#>   weight sum         81      58
#>   precision      0.0075  0.0075
#> 
#> V3
#>   mean           0.0533  0.0374
#>   std. dev.      0.0476  0.0313
#>   weight sum         81      58
#>   precision      0.0024  0.0024
#> 
#> V30
#>   mean           0.5957   0.596
#>   std. dev.       0.218  0.2366
#>   weight sum         81      58
#>   precision      0.0071  0.0071
#> 
#> V31
#>   mean           0.4957  0.5611
#>   std. dev.      0.2311  0.2015
#>   weight sum         81      58
#>   precision      0.0066  0.0066
#> 
#> V32
#>   mean           0.4423  0.4811
#>   std. dev.      0.2172  0.2093
#>   weight sum         81      58
#>   precision      0.0065  0.0065
#> 
#> V33
#>   mean            0.413  0.4661
#>   std. dev.      0.1981  0.2036
#>   weight sum         81      58
#>   precision       0.007   0.007
#> 
#> V34
#>   mean           0.3845  0.4661
#>   std. dev.      0.2117  0.2498
#>   weight sum         81      58
#>   precision      0.0068  0.0068
#> 
#> V35
#>   mean           0.3593  0.4653
#>   std. dev.      0.2411   0.273
#>   weight sum         81      58
#>   precision      0.0072  0.0072
#> 
#> V36
#>   mean           0.3335  0.4893
#>   std. dev.      0.2445  0.2626
#>   weight sum         81      58
#>   precision      0.0073  0.0073
#> 
#> V37
#>   mean            0.326  0.4465
#>   std. dev.       0.228  0.2392
#>   weight sum         81      58
#>   precision      0.0067  0.0067
#> 
#> V38
#>   mean           0.3395  0.3781
#>   std. dev.      0.2065  0.1955
#>   weight sum         81      58
#>   precision      0.0066  0.0066
#> 
#> V39
#>   mean           0.3414  0.3366
#>   std. dev.      0.1859  0.1982
#>   weight sum         81      58
#>   precision      0.0062  0.0062
#> 
#> V4
#>   mean           0.0678  0.0448
#>   std. dev.       0.059  0.0363
#>   weight sum         81      58
#>   precision      0.0033  0.0033
#> 
#> V40
#>   mean           0.3102  0.3349
#>   std. dev.      0.1649   0.182
#>   weight sum         81      58
#>   precision      0.0063  0.0063
#> 
#> V41
#>   mean           0.3064  0.2909
#>   std. dev.      0.1653  0.1726
#>   weight sum         81      58
#>   precision      0.0054  0.0054
#> 
#> V42
#>   mean           0.3097  0.2543
#>   std. dev.      0.1708  0.1616
#>   weight sum         81      58
#>   precision      0.0057  0.0057
#> 
#> V43
#>   mean           0.2806   0.207
#>   std. dev.      0.1356  0.1356
#>   weight sum         81      58
#>   precision      0.0056  0.0056
#> 
#> V44
#>   mean           0.2511  0.1772
#>   std. dev.      0.1407  0.1123
#>   weight sum         81      58
#>   precision      0.0061  0.0061
#> 
#> V45
#>   mean            0.244  0.1407
#>   std. dev.      0.1732  0.0992
#>   weight sum         81      58
#>   precision      0.0047  0.0047
#> 
#> V46
#>   mean           0.2009   0.115
#>   std. dev.      0.1559  0.0821
#>   weight sum         81      58
#>   precision      0.0054  0.0054
#> 
#> V47
#>   mean            0.152  0.0902
#>   std. dev.      0.0968  0.0569
#>   weight sum         81      58
#>   precision      0.0041  0.0041
#> 
#> V48
#>   mean           0.1158  0.0663
#>   std. dev.      0.0699  0.0434
#>   weight sum         81      58
#>   precision      0.0024  0.0024
#> 
#> V49
#>   mean           0.0661  0.0381
#>   std. dev.      0.0359  0.0259
#>   weight sum         81      58
#>   precision      0.0014  0.0014
#> 
#> V5
#>   mean           0.0899  0.0673
#>   std. dev.      0.0638  0.0535
#>   weight sum         81      58
#>   precision       0.003   0.003
#> 
#> V50
#>   mean            0.023  0.0174
#>   std. dev.      0.0145  0.0115
#>   weight sum         81      58
#>   precision      0.0007  0.0007
#> 
#> V51
#>   mean           0.0207  0.0117
#>   std. dev.      0.0148  0.0083
#>   weight sum         81      58
#>   precision      0.0009  0.0009
#> 
#> V52
#>   mean           0.0167  0.0101
#>   std. dev.      0.0114  0.0072
#>   weight sum         81      58
#>   precision      0.0006  0.0006
#> 
#> V53
#>   mean           0.0121    0.01
#>   std. dev.       0.008  0.0064
#>   weight sum         81      58
#>   precision      0.0004  0.0004
#> 
#> V54
#>   mean            0.013  0.0099
#>   std. dev.       0.009  0.0055
#>   weight sum         81      58
#>   precision      0.0003  0.0003
#> 
#> V55
#>   mean           0.0104  0.0089
#>   std. dev.      0.0086  0.0055
#>   weight sum         81      58
#>   precision      0.0004  0.0004
#> 
#> V56
#>   mean           0.0093  0.0067
#>   std. dev.      0.0065   0.004
#>   weight sum         81      58
#>   precision      0.0004  0.0004
#> 
#> V57
#>   mean           0.0084  0.0073
#>   std. dev.      0.0063  0.0045
#>   weight sum         81      58
#>   precision      0.0004  0.0004
#> 
#> V58
#>   mean           0.0095  0.0064
#>   std. dev.      0.0073   0.005
#>   weight sum         81      58
#>   precision      0.0005  0.0005
#> 
#> V59
#>   mean           0.0091  0.0078
#>   std. dev.      0.0073  0.0059
#>   weight sum         81      58
#>   precision      0.0004  0.0004
#> 
#> V6
#>   mean           0.1113  0.0978
#>   std. dev.      0.0551  0.0716
#>   weight sum         81      58
#>   precision      0.0028  0.0028
#> 
#> V60
#>   mean            0.007   0.006
#>   std. dev.      0.0064  0.0035
#>   weight sum         81      58
#>   precision      0.0005  0.0005
#> 
#> V7
#>   mean           0.1302  0.1142
#>   std. dev.       0.063   0.071
#>   weight sum         81      58
#>   precision      0.0028  0.0028
#> 
#> V8
#>   mean           0.1497  0.1149
#>   std. dev.      0.0904  0.0791
#>   weight sum         81      58
#>   precision      0.0033  0.0033
#> 
#> V9
#>   mean           0.2158  0.1389
#>   std. dev.      0.1232  0.0963
#>   weight sum         81      58
#>   precision      0.0047  0.0047
#> 
#> 


# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> classif.ce 
#>  0.2318841
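
# A minimal sketch extending the example above: request probability
# predictions and score with AUC instead of the default classification
# error (reuses the task and ids objects defined earlier).
learner = mlr3::lrn("classif.naive_bayes_weka", predict_type = "prob")
learner$train(task, row_ids = ids$train)
predictions = learner$predict(task, row_ids = ids$test)

# area under the ROC curve
predictions$score(mlr3::msr("classif.auc"))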