Naive Bayes classifier using estimator classes. Calls RWeka::make_Weka_classifier() from package RWeka.

Custom mlr3 parameters

  • output_debug_info:

    • original id: output-debug-info

  • do_not_check_capabilities:

    • original id: do-not-check-capabilities

  • num_decimal_places:

    • original id: num-decimal-places

  • batch_size:

    • original id: batch-size

  • Reason for change: the ids of these control arguments were renamed because their original ids contain hyphens, an irregular pattern for mlr3 parameter ids.
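A brief sketch of how the renamed parameters can be set on the learner via its parameter set (assuming mlr3 and mlr3extralearners are installed; the values chosen here are purely illustrative):

```r
library(mlr3)
library(mlr3extralearners)  # provides classif.naive_bayes_weka

# Use the mlr3-style ids (underscores), not Weka's hyphenated originals
learner = lrn("classif.naive_bayes_weka")
learner$param_set$set_values(
  num_decimal_places = 4,  # maps to Weka's num-decimal-places
  batch_size = 50          # maps to Weka's batch-size
)
learner$param_set$values
```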

Dictionary

This Learner can be instantiated via lrn():

lrn("classif.naive_bayes_weka")

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob”

  • Feature Types: “logical”, “integer”, “numeric”, “factor”, “ordered”

  • Required Packages: mlr3, RWeka

Parameters

Id                         Type     Default  Levels       Range
subset                     untyped  -                     -
na.action                  untyped  -                     -
K                          logical  FALSE    TRUE, FALSE  -
D                          logical  FALSE    TRUE, FALSE  -
O                          logical  FALSE    TRUE, FALSE  -
output_debug_info          logical  FALSE    TRUE, FALSE  -
do_not_check_capabilities  logical  FALSE    TRUE, FALSE  -
num_decimal_places         integer  2                     \([1, \infty)\)
batch_size                 integer  100                   \([1, \infty)\)
options                    untyped  NULL                  -
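The logical flags pass through to switches of Weka's NaiveBayes implementation; per the Weka documentation, K replaces the default normal-distribution model for numeric attributes with a kernel density estimator, and D applies supervised discretization to numeric attributes instead. A minimal sketch of setting them at construction (assuming mlr3 and mlr3extralearners are installed):

```r
library(mlr3)
library(mlr3extralearners)

# -K in Weka: kernel density estimator for numeric attributes
learner_kernel = lrn("classif.naive_bayes_weka", K = TRUE)

# -D in Weka: supervised discretization of numeric attributes
# (in Weka, K and D are alternatives, not meant to be combined)
learner_disc = lrn("classif.naive_bayes_weka", D = TRUE)
```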

References

John GH, Langley P (1995). “Estimating Continuous Distributions in Bayesian Classifiers.” In Eleventh Conference on Uncertainty in Artificial Intelligence, 338-345.

Author

damirpolat

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifNaiveBayesWeka

Methods


Method new()

Creates a new instance of this R6 class.

Usage

LearnerClassifNaiveBayesWeka$new()


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifNaiveBayesWeka$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

# Define the Learner
learner = mlr3::lrn("classif.naive_bayes_weka")
print(learner)
#> <LearnerClassifNaiveBayesWeka:classif.naive_bayes_weka>: Naive Bayes
#> * Model: -
#> * Parameters: list()
#> * Packages: mlr3, RWeka
#> * Predict Types:  [response], prob
#> * Feature Types: logical, integer, numeric, factor, ordered
#> * Properties: missings, multiclass, twoclass

# Define a Task
task = mlr3::tsk("sonar")

# Create train and test set
ids = mlr3::partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> Naive Bayes Classifier
#> 
#>                  Class
#> Attribute            M       R
#>                 (0.55)  (0.45)
#> ===============================
#> V1
#>   mean           0.0344  0.0228
#>   std. dev.      0.0264  0.0145
#>   weight sum         76      63
#>   precision      0.0011  0.0011
#> 
#> V10
#>   mean           0.2466  0.1525
#>   std. dev.      0.1301  0.1012
#>   weight sum         76      63
#>   precision      0.0047  0.0047
#> 
#> V11
#>   mean           0.2883  0.1654
#>   std. dev.       0.114  0.1013
#>   weight sum         76      63
#>   precision      0.0044  0.0044
#> 
#> V12
#>   mean            0.296  0.1854
#>   std. dev.      0.1254  0.1227
#>   weight sum         76      63
#>   precision      0.0046  0.0046
#> 
#> V13
#>   mean           0.3094  0.2284
#>   std. dev.      0.1383  0.1386
#>   weight sum         76      63
#>   precision      0.0052  0.0052
#> 
#> V14
#>   mean           0.3185  0.2677
#>   std. dev.      0.1647   0.172
#>   weight sum         76      63
#>   precision      0.0072  0.0072
#> 
#> V15
#>   mean           0.3249   0.301
#>   std. dev.       0.188   0.214
#>   weight sum         76      63
#>   precision      0.0074  0.0074
#> 
#> V16
#>   mean           0.3734  0.3766
#>   std. dev.      0.2051  0.2518
#>   weight sum         76      63
#>   precision      0.0072  0.0072
#> 
#> V17
#>   mean           0.4161    0.42
#>   std. dev.      0.2238  0.2819
#>   weight sum         76      63
#>   precision      0.0071  0.0071
#> 
#> V18
#>   mean           0.4538  0.4499
#>   std. dev.      0.2448  0.2706
#>   weight sum         76      63
#>   precision      0.0071  0.0071
#> 
#> V19
#>   mean           0.5345  0.4635
#>   std. dev.       0.251  0.2686
#>   weight sum         76      63
#>   precision       0.007   0.007
#> 
#> V2
#>   mean           0.0436  0.0308
#>   std. dev.      0.0344  0.0262
#>   weight sum         76      63
#>   precision      0.0013  0.0013
#> 
#> V20
#>   mean           0.6235  0.5054
#>   std. dev.      0.2447  0.2687
#>   weight sum         76      63
#>   precision       0.007   0.007
#> 
#> V21
#>   mean           0.6836  0.5438
#>   std. dev.      0.2367  0.2539
#>   weight sum         76      63
#>   precision      0.0072  0.0072
#> 
#> V22
#>   mean           0.6921  0.5517
#>   std. dev.      0.2249  0.2631
#>   weight sum         76      63
#>   precision      0.0068  0.0068
#> 
#> V23
#>   mean           0.6857  0.6043
#>   std. dev.      0.2428   0.242
#>   weight sum         76      63
#>   precision       0.007   0.007
#> 
#> V24
#>   mean            0.693  0.6532
#>   std. dev.      0.2328  0.2327
#>   weight sum         76      63
#>   precision      0.0071  0.0071
#> 
#> V25
#>   mean           0.6919  0.6681
#>   std. dev.       0.241  0.2444
#>   weight sum         76      63
#>   precision      0.0071  0.0071
#> 
#> V26
#>   mean           0.7389  0.6955
#>   std. dev.      0.2252  0.2433
#>   weight sum         76      63
#>   precision      0.0071  0.0071
#> 
#> V27
#>   mean           0.7607  0.6981
#>   std. dev.      0.2461  0.2329
#>   weight sum         76      63
#>   precision      0.0077  0.0077
#> 
#> V28
#>   mean           0.7494  0.6855
#>   std. dev.      0.2372  0.2166
#>   weight sum         76      63
#>   precision      0.0072  0.0072
#> 
#> V29
#>   mean           0.6771   0.627
#>   std. dev.      0.2237  0.2375
#>   weight sum         76      63
#>   precision       0.007   0.007
#> 
#> V3
#>   mean           0.0481  0.0341
#>   std. dev.      0.0364  0.0306
#>   weight sum         76      63
#>   precision      0.0015  0.0015
#> 
#> V30
#>   mean           0.6042  0.5527
#>   std. dev.      0.1969   0.234
#>   weight sum         76      63
#>   precision       0.007   0.007
#> 
#> V31
#>   mean           0.4963  0.5251
#>   std. dev.      0.2135  0.2002
#>   weight sum         76      63
#>   precision      0.0066  0.0066
#> 
#> V32
#>   mean           0.4426  0.4604
#>   std. dev.      0.2069   0.223
#>   weight sum         76      63
#>   precision      0.0064  0.0064
#> 
#> V33
#>   mean           0.3957   0.425
#>   std. dev.      0.1865  0.2248
#>   weight sum         76      63
#>   precision      0.0067  0.0067
#> 
#> V34
#>   mean           0.3513  0.4203
#>   std. dev.      0.1941  0.2486
#>   weight sum         76      63
#>   precision      0.0068  0.0068
#> 
#> V35
#>   mean           0.3232  0.4415
#>   std. dev.      0.2454  0.2483
#>   weight sum         76      63
#>   precision      0.0071  0.0071
#> 
#> V36
#>   mean           0.3057  0.4408
#>   std. dev.      0.2493   0.248
#>   weight sum         76      63
#>   precision      0.0071  0.0071
#> 
#> V37
#>   mean           0.3052  0.3989
#>   std. dev.      0.2213  0.2446
#>   weight sum         76      63
#>   precision      0.0066  0.0066
#> 
#> V38
#>   mean           0.3237  0.3413
#>   std. dev.      0.1952  0.2329
#>   weight sum         76      63
#>   precision       0.007   0.007
#> 
#> V39
#>   mean           0.3287  0.3139
#>   std. dev.      0.1826  0.2185
#>   weight sum         76      63
#>   precision      0.0069  0.0069
#> 
#> V4
#>   mean           0.0622  0.0406
#>   std. dev.      0.0439   0.031
#>   weight sum         76      63
#>   precision       0.002   0.002
#> 
#> V40
#>   mean           0.2893  0.3127
#>   std. dev.      0.1482   0.206
#>   weight sum         76      63
#>   precision      0.0067  0.0067
#> 
#> V41
#>   mean           0.2801  0.2789
#>   std. dev.      0.1541  0.1755
#>   weight sum         76      63
#>   precision      0.0063  0.0063
#> 
#> V42
#>   mean           0.2972  0.2393
#>   std. dev.      0.1749  0.1627
#>   weight sum         76      63
#>   precision      0.0059  0.0059
#> 
#> V43
#>   mean            0.285  0.1978
#>   std. dev.      0.1366  0.1166
#>   weight sum         76      63
#>   precision      0.0044  0.0044
#> 
#> V44
#>   mean           0.2569  0.1614
#>   std. dev.      0.1386  0.0872
#>   weight sum         76      63
#>   precision       0.004   0.004
#> 
#> V45
#>   mean           0.2514  0.1359
#>   std. dev.      0.1824  0.0856
#>   weight sum         76      63
#>   precision      0.0049  0.0049
#> 
#> V46
#>   mean            0.198  0.1153
#>   std. dev.      0.1533  0.0883
#>   weight sum         76      63
#>   precision      0.0047  0.0047
#> 
#> V47
#>   mean           0.1383  0.0873
#>   std. dev.      0.0848  0.0646
#>   weight sum         76      63
#>   precision      0.0031  0.0031
#> 
#> V48
#>   mean           0.1043  0.0649
#>   std. dev.      0.0574   0.046
#>   weight sum         76      63
#>   precision      0.0021  0.0021
#> 
#> V49
#>   mean            0.064  0.0358
#>   std. dev.      0.0326  0.0296
#>   weight sum         76      63
#>   precision      0.0015  0.0015
#> 
#> V5
#>   mean            0.083    0.06
#>   std. dev.      0.0518   0.047
#>   weight sum         76      63
#>   precision      0.0024  0.0024
#> 
#> V50
#>   mean           0.0232  0.0178
#>   std. dev.      0.0148  0.0129
#>   weight sum         76      63
#>   precision      0.0007  0.0007
#> 
#> V51
#>   mean           0.0186  0.0119
#>   std. dev.      0.0114  0.0081
#>   weight sum         76      63
#>   precision      0.0007  0.0007
#> 
#> V52
#>   mean           0.0153  0.0101
#>   std. dev.      0.0097  0.0062
#>   weight sum         76      63
#>   precision      0.0004  0.0004
#> 
#> V53
#>   mean           0.0112  0.0091
#>   std. dev.      0.0077   0.006
#>   weight sum         76      63
#>   precision      0.0004  0.0004
#> 
#> V54
#>   mean           0.0121  0.0094
#>   std. dev.      0.0083  0.0056
#>   weight sum         76      63
#>   precision      0.0003  0.0003
#> 
#> V55
#>   mean           0.0098  0.0083
#>   std. dev.      0.0088  0.0051
#>   weight sum         76      63
#>   precision      0.0004  0.0004
#> 
#> V56
#>   mean           0.0088  0.0068
#>   std. dev.      0.0061  0.0042
#>   weight sum         76      63
#>   precision      0.0004  0.0004
#> 
#> V57
#>   mean           0.0075  0.0068
#>   std. dev.       0.006  0.0043
#>   weight sum         76      63
#>   precision      0.0004  0.0004
#> 
#> V58
#>   mean           0.0095  0.0063
#>   std. dev.      0.0082  0.0044
#>   weight sum         76      63
#>   precision      0.0005  0.0005
#> 
#> V59
#>   mean           0.0083  0.0066
#>   std. dev.      0.0062  0.0047
#>   weight sum         76      63
#>   precision      0.0004  0.0004
#> 
#> V6
#>   mean           0.1084  0.0944
#>   std. dev.      0.0487  0.0702
#>   weight sum         76      63
#>   precision      0.0028  0.0028
#> 
#> V60
#>   mean           0.0063  0.0057
#>   std. dev.      0.0045  0.0031
#>   weight sum         76      63
#>   precision      0.0003  0.0003
#> 
#> V7
#>   mean           0.1218  0.1102
#>   std. dev.      0.0539  0.0674
#>   weight sum         76      63
#>   precision      0.0028  0.0028
#> 
#> V8
#>   mean           0.1501  0.1089
#>   std. dev.      0.0831  0.0725
#>   weight sum         76      63
#>   precision      0.0031  0.0031
#> 
#> V9
#>   mean           0.2163  0.1329
#>   std. dev.      0.1172  0.0862
#>   weight sum         76      63
#>   precision       0.005   0.005
#> 
#> 


# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> classif.ce 
#>  0.3188406
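Since the learner also supports the "prob" predict type, class probabilities can be requested and scored with a probability-based measure such as log loss. A sketch continuing the example above (task and ids as defined there):

```r
# Request probability predictions instead of hard labels
learner_prob = mlr3::lrn("classif.naive_bayes_weka", predict_type = "prob")
learner_prob$train(task, row_ids = ids$train)

pred = learner_prob$predict(task, row_ids = ids$test)
pred$score(mlr3::msr("classif.logloss"))
```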