
Naive Bayes Classifier Using Estimator Classes. Calls RWeka::make_Weka_classifier() from RWeka.

Custom mlr3 parameters

  • output_debug_info:

    • original id: output-debug-info

  • do_not_check_capabilities:

    • original id: do-not-check-capabilities

  • num_decimal_places:

    • original id: num-decimal-places

  • batch_size:

    • original id: batch-size

  • Reason for change: the ids of these control arguments were changed because their original ids contain an irregular pattern (dashes); the sketch below shows how to set them under their new ids.
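
A minimal sketch of how the renamed parameters are used, assuming the learner is constructed via lrn() as described in the Dictionary section below; the values are purely illustrative:

# The renamed ids are used on the mlr3 side; the corresponding original
# (dashed) ids are listed above.
learner = mlr3::lrn("classif.naive_bayes_weka",
  num_decimal_places = 4L,
  batch_size = 50L,
  output_debug_info = FALSE
)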

Dictionary

This Learner can be instantiated via lrn():

lrn("classif.naive_bayes_weka")

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob”

  • Feature Types: “logical”, “integer”, “numeric”, “factor”, “ordered”

  • Required Packages: mlr3, RWeka

Parameters

Id                         Type     Default  Levels       Range
subset                     untyped  -                     -
na.action                  untyped  -                     -
K                          logical  FALSE    TRUE, FALSE  -
D                          logical  FALSE    TRUE, FALSE  -
O                          logical  FALSE    TRUE, FALSE  -
output_debug_info          logical  FALSE    TRUE, FALSE  -
do_not_check_capabilities  logical  FALSE    TRUE, FALSE  -
num_decimal_places         integer  2                     [1, ∞)
batch_size                 integer  100                   [1, ∞)
options                    untyped  NULL                  -
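
A minimal sketch of changing these hyperparameters on an existing learner object via its ParamSet; the values are illustrative only:

# set a parameter at construction, another one afterwards
learner = mlr3::lrn("classif.naive_bayes_weka", K = TRUE)
learner$param_set$set_values(num_decimal_places = 4L)

# inspect the currently set values
learner$param_set$values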

References

John GH, Langley P (1995). “Estimating Continuous Distributions in Bayesian Classifiers.” In Eleventh Conference on Uncertainty in Artificial Intelligence, 338-345.

See also

Author

damirpolat

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifNaiveBayesWeka

Methods

Inherited methods


Method new()

Creates a new instance of this R6 class.


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifNaiveBayesWeka$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.
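
A minimal sketch of cloning a configured learner, assuming the usual mlr3/R6 clone semantics; the parameter value is illustrative:

learner = mlr3::lrn("classif.naive_bayes_weka", K = TRUE)
# deep = TRUE also copies mutable components such as the parameter set
learner_copy = learner$clone(deep = TRUE)
learner_copy$param_set$values$K = FALSE
learner$param_set$values$K  # the original learner is unaffected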

Examples

# Define the Learner
learner = mlr3::lrn("classif.naive_bayes_weka")
print(learner)
#> <LearnerClassifNaiveBayesWeka:classif.naive_bayes_weka>: Naive Bayes
#> * Model: -
#> * Parameters: list()
#> * Packages: mlr3, RWeka
#> * Predict Types:  [response], prob
#> * Feature Types: logical, integer, numeric, factor, ordered
#> * Properties: missings, multiclass, twoclass

# Define a Task
task = mlr3::tsk("sonar")

# Create train and test set
ids = mlr3::partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> Naive Bayes Classifier
#> 
#>                  Class
#> Attribute            M       R
#>                 (0.54)  (0.46)
#> ===============================
#> V1
#>   mean           0.0326   0.023
#>   std. dev.       0.022  0.0162
#>   weight sum         75      64
#>   precision      0.0009  0.0009
#> 
#> V10
#>   mean           0.2599  0.1639
#>   std. dev.      0.1469  0.1112
#>   weight sum         75      64
#>   precision      0.0051  0.0051
#> 
#> V11
#>   mean           0.2979  0.1775
#>   std. dev.      0.1324  0.1078
#>   weight sum         75      64
#>   precision      0.0052  0.0052
#> 
#> V12
#>   mean           0.3054  0.1902
#>   std. dev.      0.1257  0.1232
#>   weight sum         75      64
#>   precision       0.004   0.004
#> 
#> V13
#>   mean           0.3134  0.2168
#>   std. dev.      0.1212  0.1254
#>   weight sum         75      64
#>   precision      0.0045  0.0045
#> 
#> V14
#>   mean           0.3197  0.2597
#>   std. dev.      0.1587  0.1548
#>   weight sum         75      64
#>   precision      0.0071  0.0071
#> 
#> V15
#>   mean           0.3256  0.3049
#>   std. dev.      0.2008  0.2172
#>   weight sum         75      64
#>   precision      0.0073  0.0073
#> 
#> V16
#>   mean           0.3779  0.3684
#>   std. dev.      0.2117  0.2487
#>   weight sum         75      64
#>   precision      0.0072  0.0072
#> 
#> V17
#>   mean           0.4057   0.424
#>   std. dev.      0.2324  0.2894
#>   weight sum         75      64
#>   precision      0.0069  0.0069
#> 
#> V18
#>   mean            0.458  0.4452
#>   std. dev.      0.2444  0.2709
#>   weight sum         75      64
#>   precision       0.007   0.007
#> 
#> V19
#>   mean           0.5513  0.4453
#>   std. dev.      0.2453  0.2506
#>   weight sum         75      64
#>   precision      0.0068  0.0068
#> 
#> V2
#>   mean           0.0437  0.0313
#>   std. dev.      0.0323  0.0263
#>   weight sum         75      64
#>   precision      0.0013  0.0013
#> 
#> V20
#>   mean           0.6426  0.4739
#>   std. dev.      0.2459  0.2471
#>   weight sum         75      64
#>   precision      0.0069  0.0069
#> 
#> V21
#>   mean           0.6925  0.5143
#>   std. dev.      0.2353  0.2418
#>   weight sum         75      64
#>   precision       0.007   0.007
#> 
#> V22
#>   mean           0.7033  0.5559
#>   std. dev.      0.2346  0.2714
#>   weight sum         75      64
#>   precision      0.0073  0.0073
#> 
#> V23
#>   mean           0.7085  0.5967
#>   std. dev.      0.2424  0.2509
#>   weight sum         75      64
#>   precision      0.0069  0.0069
#> 
#> V24
#>   mean            0.715  0.6244
#>   std. dev.      0.2331  0.2488
#>   weight sum         75      64
#>   precision      0.0073  0.0073
#> 
#> V25
#>   mean           0.7056  0.6412
#>   std. dev.      0.2284  0.2666
#>   weight sum         75      64
#>   precision      0.0075  0.0075
#> 
#> V26
#>   mean           0.7219  0.6753
#>   std. dev.      0.2236  0.2495
#>   weight sum         75      64
#>   precision      0.0065  0.0065
#> 
#> V27
#>   mean           0.7202  0.6803
#>   std. dev.       0.264  0.2216
#>   weight sum         75      64
#>   precision      0.0073  0.0073
#> 
#> V28
#>   mean           0.7095  0.6649
#>   std. dev.      0.2627  0.2035
#>   weight sum         75      64
#>   precision      0.0075  0.0075
#> 
#> V29
#>   mean           0.6454  0.6314
#>   std. dev.      0.2464  0.2267
#>   weight sum         75      64
#>   precision      0.0074  0.0074
#> 
#> V3
#>   mean           0.0481  0.0374
#>   std. dev.      0.0358  0.0305
#>   weight sum         75      64
#>   precision      0.0015  0.0015
#> 
#> V30
#>   mean           0.5727  0.5838
#>   std. dev.      0.2089  0.2292
#>   weight sum         75      64
#>   precision       0.007   0.007
#> 
#> V31
#>   mean             0.47  0.5249
#>   std. dev.      0.2129  0.1981
#>   weight sum         75      64
#>   precision      0.0063  0.0063
#> 
#> V32
#>   mean           0.4226  0.4475
#>   std. dev.      0.2028  0.2205
#>   weight sum         75      64
#>   precision      0.0064  0.0064
#> 
#> V33
#>   mean           0.4064   0.459
#>   std. dev.      0.1914   0.228
#>   weight sum         75      64
#>   precision       0.007   0.007
#> 
#> V34
#>   mean           0.3784  0.4643
#>   std. dev.      0.2088  0.2774
#>   weight sum         75      64
#>   precision      0.0068  0.0068
#> 
#> V35
#>   mean           0.3372  0.4711
#>   std. dev.      0.2539  0.2746
#>   weight sum         75      64
#>   precision      0.0072  0.0072
#> 
#> V36
#>   mean            0.334  0.4928
#>   std. dev.      0.2589  0.2653
#>   weight sum         75      64
#>   precision      0.0072  0.0072
#> 
#> V37
#>   mean           0.3393  0.4545
#>   std. dev.      0.2389  0.2487
#>   weight sum         75      64
#>   precision      0.0067  0.0067
#> 
#> V38
#>   mean           0.3458  0.3624
#>   std. dev.      0.2113  0.2265
#>   weight sum         75      64
#>   precision       0.007   0.007
#> 
#> V39
#>   mean           0.3576  0.3057
#>   std. dev.      0.1863  0.2117
#>   weight sum         75      64
#>   precision      0.0069  0.0069
#> 
#> V4
#>   mean           0.0625  0.0425
#>   std. dev.      0.0441  0.0341
#>   weight sum         75      64
#>   precision       0.002   0.002
#> 
#> V40
#>   mean           0.3233  0.3211
#>   std. dev.      0.1642  0.1899
#>   weight sum         75      64
#>   precision      0.0067  0.0067
#> 
#> V41
#>   mean           0.2928  0.3053
#>   std. dev.      0.1563  0.1742
#>   weight sum         75      64
#>   precision      0.0063  0.0063
#> 
#> V42
#>   mean           0.2982  0.2712
#>   std. dev.      0.1519  0.1659
#>   weight sum         75      64
#>   precision      0.0057  0.0057
#> 
#> V43
#>   mean           0.2747  0.2081
#>   std. dev.      0.1333  0.1199
#>   weight sum         75      64
#>   precision      0.0045  0.0045
#> 
#> V44
#>   mean           0.2407  0.1615
#>   std. dev.      0.1417  0.0815
#>   weight sum         75      64
#>   precision      0.0043  0.0043
#> 
#> V45
#>   mean            0.237  0.1345
#>   std. dev.      0.1657   0.076
#>   weight sum         75      64
#>   precision      0.0052  0.0052
#> 
#> V46
#>   mean           0.1866  0.1155
#>   std. dev.      0.1304  0.0916
#>   weight sum         75      64
#>   precision      0.0045  0.0045
#> 
#> V47
#>   mean           0.1367  0.0951
#>   std. dev.      0.0784    0.07
#>   weight sum         75      64
#>   precision      0.0032  0.0032
#> 
#> V48
#>   mean           0.1091  0.0726
#>   std. dev.      0.0653  0.0516
#>   weight sum         75      64
#>   precision      0.0022  0.0022
#> 
#> V49
#>   mean           0.0641  0.0405
#>   std. dev.      0.0345  0.0337
#>   weight sum         75      64
#>   precision      0.0015  0.0015
#> 
#> V5
#>   mean           0.0869  0.0635
#>   std. dev.      0.0552  0.0497
#>   weight sum         75      64
#>   precision      0.0025  0.0025
#> 
#> V50
#>   mean            0.022  0.0172
#>   std. dev.      0.0139  0.0119
#>   weight sum         75      64
#>   precision      0.0007  0.0007
#> 
#> V51
#>   mean           0.0172  0.0129
#>   std. dev.      0.0098  0.0092
#>   weight sum         75      64
#>   precision      0.0004  0.0004
#> 
#> V52
#>   mean           0.0149  0.0108
#>   std. dev.      0.0087  0.0078
#>   weight sum         75      64
#>   precision      0.0004  0.0004
#> 
#> V53
#>   mean            0.011  0.0105
#>   std. dev.      0.0078  0.0066
#>   weight sum         75      64
#>   precision      0.0004  0.0004
#> 
#> V54
#>   mean           0.0121  0.0097
#>   std. dev.      0.0077   0.005
#>   weight sum         75      64
#>   precision      0.0003  0.0003
#> 
#> V55
#>   mean           0.0101   0.009
#>   std. dev.      0.0086   0.005
#>   weight sum         75      64
#>   precision      0.0004  0.0004
#> 
#> V56
#>   mean            0.009  0.0069
#>   std. dev.      0.0067  0.0039
#>   weight sum         75      64
#>   precision      0.0004  0.0004
#> 
#> V57
#>   mean            0.008  0.0074
#>   std. dev.       0.006  0.0051
#>   weight sum         75      64
#>   precision      0.0004  0.0004
#> 
#> V58
#>   mean           0.0091   0.007
#>   std. dev.      0.0075  0.0048
#>   weight sum         75      64
#>   precision      0.0005  0.0005
#> 
#> V59
#>   mean           0.0085  0.0074
#>   std. dev.      0.0065  0.0052
#>   weight sum         75      64
#>   precision      0.0004  0.0004
#> 
#> V6
#>   mean           0.1111  0.0947
#>   std. dev.      0.0548  0.0542
#>   weight sum         75      64
#>   precision      0.0023  0.0023
#> 
#> V60
#>   mean           0.0068  0.0064
#>   std. dev.      0.0061  0.0037
#>   weight sum         75      64
#>   precision      0.0005  0.0005
#> 
#> V7
#>   mean           0.1276  0.1077
#>   std. dev.      0.0611  0.0605
#>   weight sum         75      64
#>   precision      0.0024  0.0024
#> 
#> V8
#>   mean           0.1542  0.1108
#>   std. dev.      0.0921  0.0755
#>   weight sum         75      64
#>   precision      0.0034  0.0034
#> 
#> V9
#>   mean           0.2177  0.1313
#>   std. dev.       0.131  0.0931
#>   weight sum         75      64
#>   precision       0.005   0.005
#> 
#> 


# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> classif.ce 
#>  0.2028986
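
A possible extension of the example above (output not precomputed): since the learner supports the "prob" predict type (see Meta Information), probability predictions can be scored with a probability-based measure such as classif.auc on this twoclass task.

# request probability predictions instead of hard class labels
learner$predict_type = "prob"
learner$train(task, row_ids = ids$train)
predictions = learner$predict(task, row_ids = ids$test)

# score with AUC, which requires probability predictions
predictions$score(mlr3::msr("classif.auc"))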