Naive Bayes Classifier Using Estimator Classes. Calls RWeka::make_Weka_classifier() from RWeka.

Custom mlr3 parameters

  • output_debug_info:

    • original id: output-debug-info

  • do_not_check_capabilities:

    • original id: do-not-check-capabilities

  • num_decimal_places:

    • original id: num-decimal-places

  • batch_size:

    • original id: batch-size

  • Reason for change: the original ids of these control arguments contain dashes, an irregular pattern for mlr3 parameter ids, so the dashes were replaced with underscores (see the sketch below).
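
The renamed parameters are set like any other hyperparameter; a minimal sketch (the value 4 for num_decimal_places is purely illustrative):

learner = mlr3::lrn("classif.naive_bayes_weka", num_decimal_places = 4)
learner$param_set$set_values(output_debug_info = TRUE)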

Dictionary

This Learner can be instantiated via lrn():

lrn("classif.naive_bayes_weka")

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob” (see the sketch after this list)

  • Feature Types: “logical”, “integer”, “numeric”, “factor”, “ordered”

  • Required Packages: mlr3, RWeka
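
Both predict types can be selected at construction; a minimal sketch requesting probability predictions:

learner = mlr3::lrn("classif.naive_bayes_weka", predict_type = "prob")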

Parameters

Id                          Type     Default  Levels        Range
subset                      untyped  -                      -
na.action                   untyped  -                      -
K                           logical  FALSE    TRUE, FALSE   -
D                           logical  FALSE    TRUE, FALSE   -
O                           logical  FALSE    TRUE, FALSE   -
output_debug_info           logical  FALSE    TRUE, FALSE   -
do_not_check_capabilities   logical  FALSE    TRUE, FALSE   -
num_decimal_places          integer  2                      [1, ∞)
batch_size                  integer  100                    [1, ∞)
options                     untyped  NULL                   -
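
The single-letter flags are passed through to Weka's NaiveBayes; a minimal sketch, assuming the usual Weka meanings of K (kernel density estimator for numeric attributes) and D (supervised discretization):

learner = mlr3::lrn("classif.naive_bayes_weka")
learner$param_set$set_values(K = TRUE)  # assumed: kernel density estimate instead of a single Gaussian per feature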

References

John GH, Langley P (1995). “Estimating Continuous Distributions in Bayesian Classifiers.” In Eleventh Conference on Uncertainty in Artificial Intelligence, 338-345.

Author

damirpolat

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifNaiveBayesWeka

Methods

Inherited methods

See mlr3::Learner and mlr3::LearnerClassif for the inherited methods.

Method new()

Creates a new instance of this R6 class.
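
Usage (a sketch; the constructor is assumed to take no arguments, as is typical for mlr3 learners):

LearnerClassifNaiveBayesWeka$new()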


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifNaiveBayesWeka$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

# Define the Learner
learner = mlr3::lrn("classif.naive_bayes_weka")
print(learner)
#> <LearnerClassifNaiveBayesWeka:classif.naive_bayes_weka>: Naive Bayes
#> * Model: -
#> * Parameters: list()
#> * Packages: mlr3, RWeka
#> * Predict Types:  [response], prob
#> * Feature Types: logical, integer, numeric, factor, ordered
#> * Properties: missings, multiclass, twoclass

# Define a Task
task = mlr3::tsk("sonar")

# Create train and test set
ids = mlr3::partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> Naive Bayes Classifier
#> 
#>                  Class
#> Attribute            M       R
#>                 (0.55)  (0.45)
#> ===============================
#> V1
#>   mean           0.0343  0.0228
#>   std. dev.      0.0239  0.0134
#>   weight sum         76      63
#>   precision      0.0009  0.0009
#> 
#> V10
#>   mean           0.2486  0.1683
#>   std. dev.      0.1429  0.1241
#>   weight sum         76      63
#>   precision      0.0051  0.0051
#> 
#> V11
#>   mean            0.292  0.1853
#>   std. dev.      0.1253  0.1267
#>   weight sum         76      63
#>   precision      0.0052  0.0052
#> 
#> V12
#>   mean           0.3011  0.2049
#>   std. dev.      0.1257   0.151
#>   weight sum         76      63
#>   precision       0.005   0.005
#> 
#> V13
#>   mean           0.3119  0.2335
#>   std. dev.      0.1335  0.1537
#>   weight sum         76      63
#>   precision      0.0051  0.0051
#> 
#> V14
#>   mean           0.3178  0.2847
#>   std. dev.      0.1703  0.1704
#>   weight sum         76      63
#>   precision      0.0072  0.0072
#> 
#> V15
#>   mean           0.3315  0.3291
#>   std. dev.      0.1985  0.2262
#>   weight sum         76      63
#>   precision      0.0073  0.0073
#> 
#> V16
#>   mean           0.3826  0.4057
#>   std. dev.      0.2113  0.2643
#>   weight sum         76      63
#>   precision       0.007   0.007
#> 
#> V17
#>   mean           0.4128  0.4442
#>   std. dev.      0.2423  0.2941
#>   weight sum         76      63
#>   precision      0.0071  0.0071
#> 
#> V18
#>   mean           0.4597   0.461
#>   std. dev.      0.2576  0.2737
#>   weight sum         76      63
#>   precision      0.0071  0.0071
#> 
#> V19
#>   mean           0.5478   0.473
#>   std. dev.      0.2506  0.2585
#>   weight sum         76      63
#>   precision      0.0069  0.0069
#> 
#> V2
#>   mean           0.0461   0.032
#>   std. dev.      0.0342  0.0258
#>   weight sum         76      63
#>   precision      0.0013  0.0013
#> 
#> V20
#>   mean           0.6363   0.503
#>   std. dev.      0.2484   0.238
#>   weight sum         76      63
#>   precision      0.0067  0.0067
#> 
#> V21
#>   mean           0.6827  0.5487
#>   std. dev.        0.23  0.2359
#>   weight sum         76      63
#>   precision      0.0071  0.0071
#> 
#> V22
#>   mean           0.6863   0.596
#>   std. dev.      0.2219  0.2497
#>   weight sum         76      63
#>   precision      0.0069  0.0069
#> 
#> V23
#>   mean           0.6834  0.6244
#>   std. dev.      0.2568  0.2424
#>   weight sum         76      63
#>   precision      0.0071  0.0071
#> 
#> V24
#>   mean           0.6795  0.6405
#>   std. dev.      0.2581  0.2393
#>   weight sum         76      63
#>   precision      0.0074  0.0074
#> 
#> V25
#>   mean           0.6734  0.6632
#>   std. dev.      0.2584   0.244
#>   weight sum         76      63
#>   precision      0.0072  0.0072
#> 
#> V26
#>   mean           0.7014  0.6912
#>   std. dev.      0.2436  0.2345
#>   weight sum         76      63
#>   precision      0.0069  0.0069
#> 
#> V27
#>   mean           0.7019  0.6956
#>   std. dev.      0.2742   0.206
#>   weight sum         76      63
#>   precision      0.0076  0.0076
#> 
#> V28
#>   mean           0.6972  0.6739
#>   std. dev.      0.2596     0.2
#>   weight sum         76      63
#>   precision      0.0071  0.0071
#> 
#> V29
#>   mean           0.6316  0.6328
#>   std. dev.      0.2535   0.219
#>   weight sum         76      63
#>   precision      0.0074  0.0074
#> 
#> V3
#>   mean           0.0485  0.0399
#>   std. dev.      0.0376  0.0315
#>   weight sum         76      63
#>   precision      0.0015  0.0015
#> 
#> V30
#>   mean           0.5685  0.5924
#>   std. dev.      0.2118  0.2092
#>   weight sum         76      63
#>   precision      0.0064  0.0064
#> 
#> V31
#>   mean            0.467  0.5362
#>   std. dev.      0.2098  0.1895
#>   weight sum         76      63
#>   precision      0.0066  0.0066
#> 
#> V32
#>   mean           0.4132  0.4403
#>   std. dev.      0.2026   0.196
#>   weight sum         76      63
#>   precision      0.0063  0.0063
#> 
#> V33
#>   mean           0.3988  0.4162
#>   std. dev.      0.1973  0.2027
#>   weight sum         76      63
#>   precision       0.007   0.007
#> 
#> V34
#>   mean           0.3701   0.418
#>   std. dev.      0.2119  0.2548
#>   weight sum         76      63
#>   precision      0.0067  0.0067
#> 
#> V35
#>   mean           0.3337  0.4393
#>   std. dev.      0.2497  0.2738
#>   weight sum         76      63
#>   precision      0.0072  0.0072
#> 
#> V36
#>   mean           0.3079  0.4574
#>   std. dev.      0.2498  0.2729
#>   weight sum         76      63
#>   precision      0.0072  0.0072
#> 
#> V37
#>   mean           0.3028  0.4098
#>   std. dev.        0.23  0.2388
#>   weight sum         76      63
#>   precision      0.0066  0.0066
#> 
#> V38
#>   mean           0.3219  0.3424
#>   std. dev.      0.2118  0.2235
#>   weight sum         76      63
#>   precision       0.007   0.007
#> 
#> V39
#>   mean           0.3318  0.3105
#>   std. dev.      0.1914  0.2174
#>   weight sum         76      63
#>   precision      0.0069  0.0069
#> 
#> V4
#>   mean           0.0626   0.043
#>   std. dev.      0.0448  0.0347
#>   weight sum         76      63
#>   precision       0.002   0.002
#> 
#> V40
#>   mean           0.2915  0.3218
#>   std. dev.      0.1645  0.1975
#>   weight sum         76      63
#>   precision      0.0066  0.0066
#> 
#> V41
#>   mean           0.2666  0.2872
#>   std. dev.      0.1586  0.1653
#>   weight sum         76      63
#>   precision      0.0055  0.0055
#> 
#> V42
#>   mean           0.2835  0.2459
#>   std. dev.      0.1609  0.1544
#>   weight sum         76      63
#>   precision      0.0059  0.0059
#> 
#> V43
#>   mean           0.2657  0.2076
#>   std. dev.      0.1272  0.1287
#>   weight sum         76      63
#>   precision      0.0056  0.0056
#> 
#> V44
#>   mean            0.227  0.1781
#>   std. dev.      0.1323  0.1131
#>   weight sum         76      63
#>   precision      0.0058  0.0058
#> 
#> V45
#>   mean           0.2285  0.1413
#>   std. dev.       0.162  0.0935
#>   weight sum         76      63
#>   precision      0.0048  0.0048
#> 
#> V46
#>   mean           0.1846  0.1111
#>   std. dev.      0.1404  0.0862
#>   weight sum         76      63
#>   precision      0.0046  0.0046
#> 
#> V47
#>   mean           0.1362  0.0951
#>   std. dev.      0.0826   0.062
#>   weight sum         76      63
#>   precision      0.0032  0.0032
#> 
#> V48
#>   mean            0.104  0.0704
#>   std. dev.       0.062  0.0451
#>   weight sum         76      63
#>   precision      0.0021  0.0021
#> 
#> V49
#>   mean           0.0608  0.0371
#>   std. dev.      0.0346  0.0261
#>   weight sum         76      63
#>   precision      0.0014  0.0014
#> 
#> V5
#>   mean           0.0849  0.0666
#>   std. dev.      0.0557  0.0509
#>   weight sum         76      63
#>   precision      0.0024  0.0024
#> 
#> V50
#>   mean           0.0224  0.0164
#>   std. dev.      0.0149  0.0111
#>   weight sum         76      63
#>   precision      0.0008  0.0008
#> 
#> V51
#>   mean           0.0181  0.0129
#>   std. dev.      0.0118  0.0084
#>   weight sum         76      63
#>   precision      0.0007  0.0007
#> 
#> V52
#>   mean            0.016  0.0103
#>   std. dev.      0.0099  0.0071
#>   weight sum         76      63
#>   precision      0.0004  0.0004
#> 
#> V53
#>   mean           0.0117    0.01
#>   std. dev.      0.0079  0.0065
#>   weight sum         76      63
#>   precision      0.0004  0.0004
#> 
#> V54
#>   mean           0.0114  0.0098
#>   std. dev.      0.0079   0.005
#>   weight sum         76      63
#>   precision      0.0003  0.0003
#> 
#> V55
#>   mean             0.01  0.0081
#>   std. dev.      0.0085  0.0043
#>   weight sum         76      63
#>   precision      0.0004  0.0004
#> 
#> V56
#>   mean           0.0089   0.008
#>   std. dev.      0.0069  0.0049
#>   weight sum         76      63
#>   precision      0.0004  0.0004
#> 
#> V57
#>   mean            0.008  0.0079
#>   std. dev.      0.0063  0.0055
#>   weight sum         76      63
#>   precision      0.0004  0.0004
#> 
#> V58
#>   mean           0.0094  0.0072
#>   std. dev.      0.0081  0.0052
#>   weight sum         76      63
#>   precision      0.0005  0.0005
#> 
#> V59
#>   mean           0.0081  0.0074
#>   std. dev.      0.0058  0.0053
#>   weight sum         76      63
#>   precision      0.0003  0.0003
#> 
#> V6
#>   mean           0.1103  0.1029
#>   std. dev.      0.0561  0.0678
#>   weight sum         76      63
#>   precision      0.0028  0.0028
#> 
#> V60
#>   mean           0.0065  0.0064
#>   std. dev.      0.0042   0.004
#>   weight sum         76      63
#>   precision      0.0002  0.0002
#> 
#> V7
#>   mean           0.1274  0.1207
#>   std. dev.      0.0619  0.0665
#>   weight sum         76      63
#>   precision      0.0028  0.0028
#> 
#> V8
#>   mean           0.1523  0.1274
#>   std. dev.      0.0883  0.0849
#>   weight sum         76      63
#>   precision      0.0034  0.0034
#> 
#> V9
#>   mean           0.2106  0.1465
#>   std. dev.      0.1271  0.1099
#>   weight sum         76      63
#>   precision      0.0049  0.0049
#> 
#> 


# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> classif.ce 
#>  0.2028986
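
# Any other mlr3 classification measure can be scored in the same way;
# for instance accuracy, which for these predictions equals one minus the
# classification error above.
predictions$score(mlr3::msr("classif.acc"))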