
Naive Bayes Classifier Using Estimator Classes. Calls RWeka::make_Weka_classifier() from RWeka.
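
The underlying Weka classifier can also be built directly with RWeka; the sketch below is purely illustrative (the iris data and object names are assumptions, not part of this learner).

# Illustrative only: build the Weka NaiveBayes interface outside of mlr3
library(RWeka)
NaiveBayesWeka = make_Weka_classifier("weka/classifiers/bayes/NaiveBayes")
model = NaiveBayesWeka(Species ~ ., data = iris)
print(model)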

Custom mlr3 parameters

  • output_debug_info:

    • original id: output-debug-info

  • do_not_check_capabilities:

    • original id: do-not-check-capabilities

  • num_decimal_places:

    • original id: num-decimal-places

  • batch_size:

    • original id: batch-size

  • Reason for change: the ids of the control arguments listed above were changed because their original ids contain irregular patterns (hyphens); the renamed parameters are set through their mlr3 ids, as in the sketch below.
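
A minimal sketch of setting the renamed parameters through their mlr3 ids; the values are arbitrary illustrations, not recommendations.

# Renamed control arguments are addressed by their mlr3 ids (illustrative values)
learner = mlr3::lrn("classif.naive_bayes_weka",
  output_debug_info = FALSE,
  num_decimal_places = 4,
  batch_size = 200
)
learner$param_set$values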

Dictionary

This Learner can be instantiated via lrn():

lrn("classif.naive_bayes_weka")

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob” (a sketch of switching the predict type follows this list)

  • Feature Types: “logical”, “integer”, “numeric”, “factor”, “ordered”

  • Required Packages: mlr3, RWeka
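
Probability predictions are supported in addition to the response; the following sketch shows how the predict type is switched.

# Request probability predictions instead of the default response
learner = mlr3::lrn("classif.naive_bayes_weka", predict_type = "prob")
learner$predict_type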

Parameters

Id                         Type     Default  Levels       Range
subset                     untyped  -                     -
na.action                  untyped  -                     -
K                          logical  FALSE    TRUE, FALSE  -
D                          logical  FALSE    TRUE, FALSE  -
O                          logical  FALSE    TRUE, FALSE  -
output_debug_info          logical  FALSE    TRUE, FALSE  -
do_not_check_capabilities  logical  FALSE    TRUE, FALSE  -
num_decimal_places         integer  2                     [1, ∞)
batch_size                 integer  100                   [1, ∞)
options                    untyped  NULL                  -
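
In Weka's NaiveBayes the K flag enables the kernel density estimator and the D flag supervised discretization of numeric attributes; the sketch below only illustrates how such flags are toggled and makes no claim about which setting is preferable.

# Toggle Weka flags via the parameter set (illustrative values)
learner = mlr3::lrn("classif.naive_bayes_weka", K = TRUE)
learner$param_set$values$K = FALSE
learner$param_set$values$D = TRUE
learner$param_set$values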

References

John GH, Langley P (1995). “Estimating Continuous Distributions in Bayesian Classifiers.” In Eleventh Conference on Uncertainty in Artificial Intelligence, 338-345.

See also

Author

damirpolat

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifNaiveBayesWeka

Methods

Inherited methods


Method new()

Creates a new instance of this R6 class.

Usage

LearnerClassifNaiveBayesWeka$new()


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifNaiveBayesWeka$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.
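
A short sketch of deep-cloning a learner; the object names and the parameter change are illustrative assumptions.

# A deep clone is an independent copy: changing it leaves the original untouched
learner = mlr3::lrn("classif.naive_bayes_weka")
learner_copy = learner$clone(deep = TRUE)
learner_copy$param_set$values$K = TRUE
learner$param_set$values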

Examples

# Define the Learner
learner = mlr3::lrn("classif.naive_bayes_weka")
print(learner)
#> <LearnerClassifNaiveBayesWeka:classif.naive_bayes_weka>: Naive Bayes
#> * Model: -
#> * Parameters: list()
#> * Packages: mlr3, RWeka
#> * Predict Types:  [response], prob
#> * Feature Types: logical, integer, numeric, factor, ordered
#> * Properties: missings, multiclass, twoclass

# Define a Task
task = mlr3::tsk("sonar")

# Create train and test set
ids = mlr3::partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> Naive Bayes Classifier
#> 
#>                  Class
#> Attribute            M       R
#>                  (0.5)   (0.5)
#> ===============================
#> V1
#>   mean           0.0405  0.0218
#>   std. dev.      0.0305   0.016
#>   weight sum         70      69
#>   precision      0.0011  0.0011
#> 
#> V10
#>   mean           0.2556   0.159
#>   std. dev.      0.1467  0.1173
#>   weight sum         70      69
#>   precision      0.0051  0.0051
#> 
#> V11
#>   mean           0.2988  0.1729
#>   std. dev.      0.1298  0.1188
#>   weight sum         70      69
#>   precision      0.0052  0.0052
#> 
#> V12
#>   mean           0.3079  0.1908
#>   std. dev.      0.1261  0.1371
#>   weight sum         70      69
#>   precision      0.0049  0.0049
#> 
#> V13
#>   mean           0.3177  0.2193
#>   std. dev.      0.1397  0.1443
#>   weight sum         70      69
#>   precision      0.0052  0.0052
#> 
#> V14
#>   mean           0.3287   0.262
#>   std. dev.      0.1607  0.1669
#>   weight sum         70      69
#>   precision      0.0059  0.0059
#> 
#> V15
#>   mean           0.3562  0.2998
#>   std. dev.      0.1999  0.2078
#>   weight sum         70      69
#>   precision      0.0073  0.0073
#> 
#> V16
#>   mean           0.4083  0.3585
#>   std. dev.      0.2199  0.2341
#>   weight sum         70      69
#>   precision      0.0068  0.0068
#> 
#> V17
#>   mean           0.4274  0.3844
#>   std. dev.       0.252  0.2641
#>   weight sum         70      69
#>   precision      0.0071  0.0071
#> 
#> V18
#>   mean           0.4524  0.4086
#>   std. dev.      0.2663  0.2478
#>   weight sum         70      69
#>   precision      0.0067  0.0067
#> 
#> V19
#>   mean           0.5261  0.4407
#>   std. dev.      0.2586  0.2449
#>   weight sum         70      69
#>   precision      0.0069  0.0069
#> 
#> V2
#>   mean            0.054  0.0277
#>   std. dev.      0.0418  0.0194
#>   weight sum         70      69
#>   precision      0.0018  0.0018
#> 
#> V20
#>   mean           0.6207  0.4841
#>   std. dev.      0.2526  0.2608
#>   weight sum         70      69
#>   precision       0.007   0.007
#> 
#> V21
#>   mean           0.6784  0.5122
#>   std. dev.      0.2425  0.2515
#>   weight sum         70      69
#>   precision      0.0072  0.0072
#> 
#> V22
#>   mean           0.6616  0.5269
#>   std. dev.      0.2402  0.2639
#>   weight sum         70      69
#>   precision      0.0072  0.0072
#> 
#> V23
#>   mean            0.647  0.5983
#>   std. dev.      0.2699  0.2428
#>   weight sum         70      69
#>   precision       0.007   0.007
#> 
#> V24
#>   mean           0.6618  0.6432
#>   std. dev.      0.2672   0.234
#>   weight sum         70      69
#>   precision      0.0072  0.0072
#> 
#> V25
#>   mean           0.6541  0.6496
#>   std. dev.      0.2623  0.2628
#>   weight sum         70      69
#>   precision      0.0073  0.0073
#> 
#> V26
#>   mean           0.6845  0.6797
#>   std. dev.      0.2458  0.2513
#>   weight sum         70      69
#>   precision      0.0065  0.0065
#> 
#> V27
#>   mean           0.6933  0.6859
#>   std. dev.      0.2741  0.2333
#>   weight sum         70      69
#>   precision      0.0072  0.0072
#> 
#> V28
#>   mean           0.6993  0.6855
#>   std. dev.      0.2584  0.2141
#>   weight sum         70      69
#>   precision      0.0076  0.0076
#> 
#> V29
#>   mean           0.6338    0.64
#>   std. dev.       0.245  0.2379
#>   weight sum         70      69
#>   precision      0.0075  0.0075
#> 
#> V3
#>   mean           0.0559  0.0331
#>   std. dev.      0.0466  0.0281
#>   weight sum         70      69
#>   precision      0.0023  0.0023
#> 
#> V30
#>   mean           0.5796  0.5775
#>   std. dev.      0.2143   0.236
#>   weight sum         70      69
#>   precision       0.007   0.007
#> 
#> V31
#>   mean           0.4775  0.5262
#>   std. dev.      0.2148  0.2057
#>   weight sum         70      69
#>   precision      0.0063  0.0063
#> 
#> V32
#>   mean           0.4086  0.4523
#>   std. dev.      0.2053  0.2061
#>   weight sum         70      69
#>   precision      0.0062  0.0062
#> 
#> V33
#>   mean           0.3753  0.4431
#>   std. dev.      0.1864  0.2005
#>   weight sum         70      69
#>   precision      0.0063  0.0063
#> 
#> V34
#>   mean           0.3667  0.4483
#>   std. dev.      0.2144  0.2497
#>   weight sum         70      69
#>   precision      0.0067  0.0067
#> 
#> V35
#>   mean           0.3577  0.4561
#>   std. dev.      0.2609  0.2616
#>   weight sum         70      69
#>   precision      0.0071  0.0071
#> 
#> V36
#>   mean           0.3238  0.4734
#>   std. dev.       0.261  0.2526
#>   weight sum         70      69
#>   precision      0.0071  0.0071
#> 
#> V37
#>   mean            0.321  0.4296
#>   std. dev.      0.2335  0.2429
#>   weight sum         70      69
#>   precision      0.0066  0.0066
#> 
#> V38
#>   mean           0.3366  0.3535
#>   std. dev.      0.2215   0.229
#>   weight sum         70      69
#>   precision       0.007   0.007
#> 
#> V39
#>   mean            0.332  0.3213
#>   std. dev.      0.1898  0.2134
#>   weight sum         70      69
#>   precision       0.007   0.007
#> 
#> V4
#>   mean           0.0692  0.0412
#>   std. dev.      0.0593  0.0304
#>   weight sum         70      69
#>   precision      0.0033  0.0033
#> 
#> V40
#>   mean           0.2923  0.3158
#>   std. dev.      0.1561   0.202
#>   weight sum         70      69
#>   precision      0.0067  0.0067
#> 
#> V41
#>   mean           0.2917  0.2804
#>   std. dev.      0.1628  0.1766
#>   weight sum         70      69
#>   precision      0.0063  0.0063
#> 
#> V42
#>   mean           0.3024  0.2451
#>   std. dev.      0.1718  0.1605
#>   weight sum         70      69
#>   precision      0.0058  0.0058
#> 
#> V43
#>   mean           0.2666  0.1984
#>   std. dev.      0.1458  0.1097
#>   weight sum         70      69
#>   precision      0.0055  0.0055
#> 
#> V44
#>   mean           0.2346  0.1613
#>   std. dev.      0.1377   0.084
#>   weight sum         70      69
#>   precision      0.0043  0.0043
#> 
#> V45
#>   mean           0.2391  0.1318
#>   std. dev.      0.1691  0.0848
#>   weight sum         70      69
#>   precision      0.0047  0.0047
#> 
#> V46
#>   mean           0.1957  0.1126
#>   std. dev.      0.1563   0.093
#>   weight sum         70      69
#>   precision      0.0056  0.0056
#> 
#> V47
#>   mean           0.1415  0.0913
#>   std. dev.      0.0923   0.069
#>   weight sum         70      69
#>   precision      0.0041  0.0041
#> 
#> V48
#>   mean           0.1081  0.0684
#>   std. dev.      0.0623  0.0487
#>   weight sum         70      69
#>   precision      0.0025  0.0025
#> 
#> V49
#>   mean           0.0627  0.0395
#>   std. dev.      0.0343  0.0324
#>   weight sum         70      69
#>   precision      0.0015  0.0015
#> 
#> V5
#>   mean           0.0843  0.0586
#>   std. dev.      0.0609   0.043
#>   weight sum         70      69
#>   precision       0.003   0.003
#> 
#> V50
#>   mean           0.0229   0.018
#>   std. dev.       0.014  0.0134
#>   weight sum         70      69
#>   precision      0.0007  0.0007
#> 
#> V51
#>   mean           0.0202  0.0128
#>   std. dev.      0.0153  0.0094
#>   weight sum         70      69
#>   precision      0.0009  0.0009
#> 
#> V52
#>   mean           0.0171  0.0107
#>   std. dev.      0.0115  0.0075
#>   weight sum         70      69
#>   precision      0.0007  0.0007
#> 
#> V53
#>   mean           0.0123  0.0092
#>   std. dev.      0.0074  0.0056
#>   weight sum         70      69
#>   precision      0.0004  0.0004
#> 
#> V54
#>   mean           0.0134  0.0091
#>   std. dev.      0.0088  0.0053
#>   weight sum         70      69
#>   precision      0.0003  0.0003
#> 
#> V55
#>   mean           0.0103  0.0083
#>   std. dev.      0.0078  0.0048
#>   weight sum         70      69
#>   precision      0.0004  0.0004
#> 
#> V56
#>   mean           0.0094  0.0072
#>   std. dev.      0.0058  0.0047
#>   weight sum         70      69
#>   precision      0.0004  0.0004
#> 
#> V57
#>   mean            0.008  0.0085
#>   std. dev.      0.0055  0.0058
#>   weight sum         70      69
#>   precision      0.0003  0.0003
#> 
#> V58
#>   mean           0.0095  0.0069
#>   std. dev.      0.0072   0.005
#>   weight sum         70      69
#>   precision      0.0004  0.0004
#> 
#> V59
#>   mean           0.0095   0.007
#>   std. dev.      0.0071   0.005
#>   weight sum         70      69
#>   precision      0.0004  0.0004
#> 
#> V6
#>   mean           0.1163  0.0867
#>   std. dev.      0.0518  0.0518
#>   weight sum         70      69
#>   precision      0.0018  0.0018
#> 
#> V60
#>   mean            0.008  0.0054
#>   std. dev.      0.0067  0.0031
#>   weight sum         70      69
#>   precision      0.0005  0.0005
#> 
#> V7
#>   mean           0.1332  0.1047
#>   std. dev.      0.0563  0.0576
#>   weight sum         70      69
#>   precision      0.0025  0.0025
#> 
#> V8
#>   mean           0.1553  0.1144
#>   std. dev.      0.0969  0.0829
#>   weight sum         70      69
#>   precision      0.0034  0.0034
#> 
#> V9
#>   mean            0.222  0.1372
#>   std. dev.      0.1329  0.1011
#>   weight sum         70      69
#>   precision      0.0049  0.0049
#> 
#> 


# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> classif.ce 
#>  0.4492754
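
# Alternative measures can be passed explicitly; classif.acc is only an
# illustrative choice here
predictions$score(mlr3::msr("classif.acc"))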