Naive Bayes Classifier Using Estimator Classes. Calls RWeka::make_Weka_classifier() from RWeka.

Custom mlr3 parameters

  • output_debug_info:

    • original id: output-debug-info

  • do_not_check_capabilities:

    • original id: do-not-check-capabilities

  • num_decimal_places:

    • original id: num-decimal-places

  • batch_size:

    • original id: batch-size

  • Reason for change: the ids of these control arguments were changed because their original ids contain an irregular pattern (hyphens); see the usage sketch after this list.
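
A minimal usage sketch (illustrative values, not taken from this page): the renamed ids are set like any other hyperparameter and are mapped back to Weka's hyphenated control arguments internally.

library(mlr3)
library(mlr3extralearners)

# e.g. output_debug_info corresponds to Weka's output-debug-info,
# batch_size to batch-size
learner = lrn("classif.naive_bayes_weka",
  output_debug_info = FALSE,
  batch_size = 100
)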

Dictionary

This Learner can be instantiated via lrn():

lrn("classif.naive_bayes_weka")

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob”

  • Feature Types: “logical”, “integer”, “numeric”, “factor”, “ordered”

  • Required Packages: mlr3, RWeka

Parameters

Id                          Type     Default  Levels       Range
subset                      untyped  -                     -
na.action                   untyped  -                     -
K                           logical  FALSE    TRUE, FALSE  -
D                           logical  FALSE    TRUE, FALSE  -
O                           logical  FALSE    TRUE, FALSE  -
output_debug_info           logical  FALSE    TRUE, FALSE  -
do_not_check_capabilities   logical  FALSE    TRUE, FALSE  -
num_decimal_places          integer  2                     [1, ∞)
batch_size                  integer  100                   [1, ∞)
options                     untyped  NULL                  -
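
A short sketch of inspecting and setting parameters after construction. Note: the meaning of K in the comment follows Weka's NaiveBayes documentation (use a kernel density estimator for numeric attributes) and is an assumption here, not stated on this page.

learner = mlr3::lrn("classif.naive_bayes_weka")
learner$param_set$ids()            # all available hyperparameter ids
learner$param_set$values$K = TRUE  # assumed: Weka's kernel-estimator (-K) flag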

References

John GH, Langley P (1995). “Estimating Continuous Distributions in Bayesian Classifiers.” In Eleventh Conference on Uncertainty in Artificial Intelligence, 338-345.

See also

Author

damirpolat

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifNaiveBayesWeka

Methods

Inherited methods


Method new()

Creates a new instance of this R6 class.
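
Usage (the standard R6 constructor call, shown for symmetry with clone() below):

LearnerClassifNaiveBayesWeka$new()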


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifNaiveBayesWeka$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

# Define the Learner
learner = mlr3::lrn("classif.naive_bayes_weka")
print(learner)
#> <LearnerClassifNaiveBayesWeka:classif.naive_bayes_weka>: Naive Bayes
#> * Model: -
#> * Parameters: list()
#> * Packages: mlr3, RWeka
#> * Predict Types:  [response], prob
#> * Feature Types: logical, integer, numeric, factor, ordered
#> * Properties: missings, multiclass, twoclass

# Define a Task
task = mlr3::tsk("sonar")

# Create train and test set
ids = mlr3::partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> Naive Bayes Classifier
#> 
#>                  Class
#> Attribute            M       R
#>                 (0.53)  (0.47)
#> ===============================
#> V1
#>   mean           0.0335  0.0235
#>   std. dev.      0.0239  0.0145
#>   weight sum         74      65
#>   precision      0.0011  0.0011
#> 
#> V10
#>   mean           0.2319  0.1688
#>   std. dev.      0.1233  0.1193
#>   weight sum         74      65
#>   precision      0.0043  0.0043
#> 
#> V11
#>   mean           0.2804  0.1861
#>   std. dev.      0.1163   0.126
#>   weight sum         74      65
#>   precision      0.0047  0.0047
#> 
#> V12
#>   mean           0.3018  0.2107
#>   std. dev.      0.1195  0.1453
#>   weight sum         74      65
#>   precision       0.005   0.005
#> 
#> V13
#>   mean            0.315  0.2397
#>   std. dev.      0.1236  0.1355
#>   weight sum         74      65
#>   precision      0.0049  0.0049
#> 
#> V14
#>   mean           0.3122  0.2707
#>   std. dev.      0.1626  0.1676
#>   weight sum         74      65
#>   precision      0.0071  0.0071
#> 
#> V15
#>   mean           0.3088   0.301
#>   std. dev.      0.1891  0.2134
#>   weight sum         74      65
#>   precision      0.0074  0.0074
#> 
#> V16
#>   mean           0.3518  0.3726
#>   std. dev.      0.2029   0.248
#>   weight sum         74      65
#>   precision      0.0071  0.0071
#> 
#> V17
#>   mean           0.3948  0.4365
#>   std. dev.      0.2282  0.2865
#>   weight sum         74      65
#>   precision      0.0071  0.0071
#> 
#> V18
#>   mean            0.436  0.4716
#>   std. dev.      0.2438  0.2745
#>   weight sum         74      65
#>   precision       0.007   0.007
#> 
#> V19
#>   mean           0.5226  0.4763
#>   std. dev.      0.2506  0.2715
#>   weight sum         74      65
#>   precision      0.0069  0.0069
#> 
#> V2
#>   mean           0.0462  0.0307
#>   std. dev.      0.0338  0.0244
#>   weight sum         74      65
#>   precision      0.0013  0.0013
#> 
#> V20
#>   mean            0.597  0.5104
#>   std. dev.      0.2468  0.2673
#>   weight sum         74      65
#>   precision      0.0069  0.0069
#> 
#> V21
#>   mean           0.6513  0.5511
#>   std. dev.      0.2421  0.2499
#>   weight sum         74      65
#>   precision      0.0071  0.0071
#> 
#> V22
#>   mean            0.682  0.5694
#>   std. dev.      0.2271  0.2656
#>   weight sum         74      65
#>   precision      0.0072  0.0072
#> 
#> V23
#>   mean           0.6997  0.6194
#>   std. dev.       0.224    0.24
#>   weight sum         74      65
#>   precision      0.0071  0.0071
#> 
#> V24
#>   mean           0.7183  0.6533
#>   std. dev.      0.2194  0.2399
#>   weight sum         74      65
#>   precision      0.0065  0.0065
#> 
#> V25
#>   mean           0.7106  0.6541
#>   std. dev.      0.2268  0.2611
#>   weight sum         74      65
#>   precision      0.0075  0.0075
#> 
#> V26
#>   mean           0.7416  0.6659
#>   std. dev.      0.2186  0.2546
#>   weight sum         74      65
#>   precision      0.0066  0.0066
#> 
#> V27
#>   mean           0.7608  0.6812
#>   std. dev.      0.2379  0.2408
#>   weight sum         74      65
#>   precision      0.0073  0.0073
#> 
#> V28
#>   mean           0.7513  0.6758
#>   std. dev.      0.2426  0.2229
#>   weight sum         74      65
#>   precision      0.0077  0.0077
#> 
#> V29
#>   mean           0.6822  0.6304
#>   std. dev.      0.2318  0.2287
#>   weight sum         74      65
#>   precision      0.0075  0.0075
#> 
#> V3
#>   mean            0.051   0.038
#>   std. dev.      0.0396    0.03
#>   weight sum         74      65
#>   precision      0.0015  0.0015
#> 
#> V30
#>   mean            0.591  0.5684
#>   std. dev.      0.2063  0.2255
#>   weight sum         74      65
#>   precision       0.007   0.007
#> 
#> V31
#>   mean           0.4863  0.5158
#>   std. dev.      0.2243  0.2002
#>   weight sum         74      65
#>   precision      0.0066  0.0066
#> 
#> V32
#>   mean           0.4259  0.4351
#>   std. dev.      0.2122  0.2057
#>   weight sum         74      65
#>   precision      0.0065  0.0065
#> 
#> V33
#>   mean           0.3793  0.4218
#>   std. dev.      0.1784  0.2175
#>   weight sum         74      65
#>   precision      0.0056  0.0056
#> 
#> V34
#>   mean           0.3433  0.4354
#>   std. dev.      0.1762  0.2607
#>   weight sum         74      65
#>   precision      0.0067  0.0067
#> 
#> V35
#>   mean           0.3157  0.4519
#>   std. dev.      0.2316  0.2611
#>   weight sum         74      65
#>   precision      0.0072  0.0072
#> 
#> V36
#>   mean           0.3017  0.4605
#>   std. dev.      0.2384  0.2595
#>   weight sum         74      65
#>   precision      0.0072  0.0072
#> 
#> V37
#>   mean           0.3081  0.4189
#>   std. dev.       0.221  0.2443
#>   weight sum         74      65
#>   precision      0.0066  0.0066
#> 
#> V38
#>   mean           0.3215  0.3513
#>   std. dev.      0.1915  0.2177
#>   weight sum         74      65
#>   precision       0.007   0.007
#> 
#> V39
#>   mean           0.3263  0.2948
#>   std. dev.       0.168  0.2054
#>   weight sum         74      65
#>   precision      0.0068  0.0068
#> 
#> V4
#>   mean           0.0615  0.0442
#>   std. dev.      0.0451   0.033
#>   weight sum         74      65
#>   precision       0.002   0.002
#> 
#> V40
#>   mean           0.2944  0.3035
#>   std. dev.      0.1472  0.1888
#>   weight sum         74      65
#>   precision      0.0067  0.0067
#> 
#> V41
#>   mean           0.2976   0.285
#>   std. dev.      0.1594  0.1861
#>   weight sum         74      65
#>   precision      0.0063  0.0063
#> 
#> V42
#>   mean           0.3131  0.2562
#>   std. dev.      0.1808  0.1724
#>   weight sum         74      65
#>   precision      0.0059  0.0059
#> 
#> V43
#>   mean           0.2862  0.2021
#>   std. dev.      0.1431  0.1233
#>   weight sum         74      65
#>   precision      0.0053  0.0053
#> 
#> V44
#>   mean           0.2477  0.1588
#>   std. dev.      0.1492  0.0803
#>   weight sum         74      65
#>   precision      0.0041  0.0041
#> 
#> V45
#>   mean           0.2402  0.1274
#>   std. dev.      0.1749  0.0752
#>   weight sum         74      65
#>   precision      0.0047  0.0047
#> 
#> V46
#>   mean           0.1978  0.1064
#>   std. dev.      0.1552  0.0866
#>   weight sum         74      65
#>   precision      0.0055  0.0055
#> 
#> V47
#>   mean           0.1458  0.0915
#>   std. dev.      0.1024  0.0653
#>   weight sum         74      65
#>   precision       0.004   0.004
#> 
#> V48
#>   mean           0.1079    0.07
#>   std. dev.      0.0704  0.0472
#>   weight sum         74      65
#>   precision      0.0024  0.0024
#> 
#> V49
#>   mean           0.0625  0.0386
#>   std. dev.      0.0373  0.0317
#>   weight sum         74      65
#>   precision      0.0015  0.0015
#> 
#> V5
#>   mean           0.0857  0.0612
#>   std. dev.      0.0539  0.0479
#>   weight sum         74      65
#>   precision      0.0024  0.0024
#> 
#> V50
#>   mean           0.0219  0.0178
#>   std. dev.      0.0128  0.0136
#>   weight sum         74      65
#>   precision      0.0007  0.0007
#> 
#> V51
#>   mean           0.0207  0.0125
#>   std. dev.      0.0154  0.0092
#>   weight sum         74      65
#>   precision      0.0009  0.0009
#> 
#> V52
#>   mean           0.0171  0.0102
#>   std. dev.      0.0119  0.0078
#>   weight sum         74      65
#>   precision      0.0007  0.0007
#> 
#> V53
#>   mean           0.0114  0.0094
#>   std. dev.       0.007  0.0058
#>   weight sum         74      65
#>   precision      0.0003  0.0003
#> 
#> V54
#>   mean           0.0122  0.0089
#>   std. dev.      0.0086  0.0048
#>   weight sum         74      65
#>   precision      0.0003  0.0003
#> 
#> V55
#>   mean           0.0097  0.0086
#>   std. dev.      0.0082  0.0048
#>   weight sum         74      65
#>   precision      0.0004  0.0004
#> 
#> V56
#>   mean           0.0088  0.0071
#>   std. dev.      0.0055  0.0046
#>   weight sum         74      65
#>   precision      0.0003  0.0003
#> 
#> V57
#>   mean           0.0075  0.0077
#>   std. dev.      0.0049  0.0054
#>   weight sum         74      65
#>   precision      0.0003  0.0003
#> 
#> V58
#>   mean           0.0083  0.0068
#>   std. dev.      0.0062  0.0048
#>   weight sum         74      65
#>   precision      0.0003  0.0003
#> 
#> V59
#>   mean           0.0089  0.0072
#>   std. dev.      0.0066   0.005
#>   weight sum         74      65
#>   precision      0.0004  0.0004
#> 
#> V6
#>   mean           0.1124  0.0952
#>   std. dev.       0.055   0.059
#>   weight sum         74      65
#>   precision      0.0022  0.0022
#> 
#> V60
#>   mean           0.0066   0.006
#>   std. dev.      0.0045  0.0035
#>   weight sum         74      65
#>   precision      0.0003  0.0003
#> 
#> V7
#>   mean           0.1229  0.1136
#>   std. dev.      0.0573  0.0636
#>   weight sum         74      65
#>   precision      0.0022  0.0022
#> 
#> V8
#>   mean           0.1498  0.1211
#>   std. dev.       0.085  0.0804
#>   weight sum         74      65
#>   precision      0.0033  0.0033
#> 
#> V9
#>   mean           0.2014  0.1451
#>   std. dev.      0.1165  0.1058
#>   weight sum         74      65
#>   precision       0.005   0.005
#> 
#> 


# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> classif.ce 
#>  0.2173913
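
# A hedged extension, not part of the original output: score() above used
# the default classification error (classif.ce); any other mlr3 measure
# can be passed explicitly, e.g. accuracy.
predictions$score(mlr3::msr("classif.acc"))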