
Naive Bayes Classifier Using Estimator Classes. Calls RWeka::make_Weka_classifier() from RWeka.

Custom mlr3 parameters

  • output_debug_info:

    • original id: output-debug-info

  • do_not_check_capabilities:

    • original id: do-not-check-capabilities

  • num_decimal_places:

    • original id: num-decimal-places

  • batch_size:

    • original id: batch-size

  • Reason for change: the ids of the control arguments listed above were changed because their original Weka ids contain dashes, which do not fit mlr3's parameter naming conventions. The renamed ids are set like any other hyperparameter, as sketched below.
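
A minimal sketch of setting the renamed control arguments, assuming mlr3 and mlr3extralearners are attached (the mapping back to the original Weka ids happens internally):

library(mlr3)
library(mlr3extralearners)

# Renamed control arguments are passed under their mlr3 ids
learner = lrn("classif.naive_bayes_weka",
  output_debug_info = TRUE,  # forwarded to Weka as "output-debug-info"
  num_decimal_places = 4     # forwarded to Weka as "num-decimal-places"
)
learner$param_set$values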

Dictionary

This Learner can be instantiated via lrn():

lrn("classif.naive_bayes_weka")

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob”

  • Feature Types: “logical”, “integer”, “numeric”, “factor”, “ordered”

  • Required Packages: mlr3, RWeka
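
A constructed learner exposes this information programmatically; for instance, probability predictions can be requested by changing the predict type. A short sketch, assuming mlr3extralearners is loaded so the key is registered in the learner dictionary:

learner = mlr3::lrn("classif.naive_bayes_weka")
learner$packages               # "mlr3" "RWeka"
learner$feature_types          # logical, integer, numeric, factor, ordered
learner$predict_type = "prob"  # switch from the default "response"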

Parameters

Id                           Type      Default  Levels        Range
subset                       untyped   -        -             -
na.action                    untyped   -        -             -
K                            logical   FALSE    TRUE, FALSE   -
D                            logical   FALSE    TRUE, FALSE   -
O                            logical   FALSE    TRUE, FALSE   -
output_debug_info            logical   FALSE    TRUE, FALSE   -
do_not_check_capabilities    logical   FALSE    TRUE, FALSE   -
num_decimal_places           integer   2        -             \([1, \infty)\)
batch_size                   integer   100      -             \([1, \infty)\)
options                      untyped   NULL     -             -
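
The same information is available from the learner's param_set; a brief sketch for inspecting it interactively:

learner = mlr3::lrn("classif.naive_bayes_weka")
learner$param_set        # prints ids, types, defaults, and ranges
learner$param_set$ids()  # just the parameter ids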

References

John GH, Langley P (1995). “Estimating Continuous Distributions in Bayesian Classifiers.” In Eleventh Conference on Uncertainty in Artificial Intelligence, 338-345.

Author

damirpolat

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifNaiveBayesWeka

Methods

Method new()

Creates a new instance of this R6 class.
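
Usage

LearnerClassifNaiveBayesWeka$new()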


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifNaiveBayesWeka$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

# Define the Learner
learner = mlr3::lrn("classif.naive_bayes_weka")
print(learner)
#> <LearnerClassifNaiveBayesWeka:classif.naive_bayes_weka>: Naive Bayes
#> * Model: -
#> * Parameters: list()
#> * Packages: mlr3, RWeka
#> * Predict Types:  [response], prob
#> * Feature Types: logical, integer, numeric, factor, ordered
#> * Properties: missings, multiclass, twoclass

# Define a Task
task = mlr3::tsk("sonar")

# Create train and test set
ids = mlr3::partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> Naive Bayes Classifier
#> 
#>                  Class
#> Attribute            M       R
#>                 (0.53)  (0.47)
#> ===============================
#> V1
#>   mean           0.0366  0.0223
#>   std. dev.      0.0284  0.0136
#>   weight sum         74      65
#>   precision      0.0011  0.0011
#> 
#> V10
#>   mean            0.258  0.1507
#>   std. dev.      0.1379  0.1045
#>   weight sum         74      65
#>   precision      0.0047  0.0047
#> 
#> V11
#>   mean           0.2856   0.164
#>   std. dev.      0.1131   0.107
#>   weight sum         74      65
#>   precision      0.0044  0.0044
#> 
#> V12
#>   mean           0.2901   0.188
#>   std. dev.      0.1165  0.1369
#>   weight sum         74      65
#>   precision       0.005   0.005
#> 
#> V13
#>   mean           0.3032  0.2236
#>   std. dev.      0.1248  0.1299
#>   weight sum         74      65
#>   precision      0.0051  0.0051
#> 
#> V14
#>   mean           0.3125  0.2689
#>   std. dev.      0.1623  0.1587
#>   weight sum         74      65
#>   precision      0.0071  0.0071
#> 
#> V15
#>   mean           0.3296  0.3095
#>   std. dev.      0.2025  0.2185
#>   weight sum         74      65
#>   precision      0.0073  0.0073
#> 
#> V16
#>   mean           0.3796  0.3792
#>   std. dev.      0.2139  0.2517
#>   weight sum         74      65
#>   precision      0.0072  0.0072
#> 
#> V17
#>   mean           0.4077  0.4168
#>   std. dev.      0.2341  0.2839
#>   weight sum         74      65
#>   precision      0.0071  0.0071
#> 
#> V18
#>   mean           0.4509  0.4505
#>   std. dev.      0.2567  0.2611
#>   weight sum         74      65
#>   precision      0.0066  0.0066
#> 
#> V19
#>   mean           0.5394  0.4771
#>   std. dev.      0.2615  0.2612
#>   weight sum         74      65
#>   precision      0.0069  0.0069
#> 
#> V2
#>   mean           0.0475  0.0318
#>   std. dev.      0.0409  0.0259
#>   weight sum         74      65
#>   precision      0.0018  0.0018
#> 
#> V20
#>   mean           0.6295  0.5023
#>   std. dev.      0.2672  0.2749
#>   weight sum         74      65
#>   precision      0.0069  0.0069
#> 
#> V21
#>   mean           0.6854  0.5465
#>   std. dev.      0.2613  0.2589
#>   weight sum         74      65
#>   precision      0.0072  0.0072
#> 
#> V22
#>   mean           0.6887  0.5677
#>   std. dev.      0.2387  0.2663
#>   weight sum         74      65
#>   precision      0.0073  0.0073
#> 
#> V23
#>   mean           0.6918  0.6094
#>   std. dev.      0.2522  0.2482
#>   weight sum         74      65
#>   precision       0.007   0.007
#> 
#> V24
#>   mean           0.6884  0.6627
#>   std. dev.      0.2458   0.235
#>   weight sum         74      65
#>   precision      0.0074  0.0074
#> 
#> V25
#>   mean           0.6678  0.6889
#>   std. dev.      0.2534  0.2496
#>   weight sum         74      65
#>   precision      0.0074  0.0074
#> 
#> V26
#>   mean           0.6843  0.7104
#>   std. dev.      0.2394  0.2381
#>   weight sum         74      65
#>   precision      0.0065  0.0065
#> 
#> V27
#>   mean           0.6886   0.683
#>   std. dev.      0.2667  0.2306
#>   weight sum         74      65
#>   precision      0.0069  0.0069
#> 
#> V28
#>   mean           0.6953  0.6511
#>   std. dev.      0.2629  0.1955
#>   weight sum         74      65
#>   precision      0.0075  0.0075
#> 
#> V29
#>   mean           0.6417  0.5909
#>   std. dev.      0.2427  0.2233
#>   weight sum         74      65
#>   precision      0.0073  0.0073
#> 
#> V3
#>   mean           0.0512  0.0362
#>   std. dev.      0.0474  0.0314
#>   weight sum         74      65
#>   precision      0.0024  0.0024
#> 
#> V30
#>   mean           0.5786  0.5412
#>   std. dev.      0.2061  0.2334
#>   weight sum         74      65
#>   precision       0.007   0.007
#> 
#> V31
#>   mean           0.4734  0.5109
#>   std. dev.      0.2225  0.1951
#>   weight sum         74      65
#>   precision      0.0063  0.0063
#> 
#> V32
#>   mean           0.4198  0.4407
#>   std. dev.      0.2159  0.2112
#>   weight sum         74      65
#>   precision      0.0064  0.0064
#> 
#> V33
#>   mean           0.3948  0.4225
#>   std. dev.       0.208  0.2195
#>   weight sum         74      65
#>   precision       0.007   0.007
#> 
#> V34
#>   mean           0.3813  0.4211
#>   std. dev.      0.2217  0.2538
#>   weight sum         74      65
#>   precision      0.0067  0.0067
#> 
#> V35
#>   mean           0.3748  0.4412
#>   std. dev.      0.2654  0.2628
#>   weight sum         74      65
#>   precision      0.0072  0.0072
#> 
#> V36
#>   mean           0.3662  0.4506
#>   std. dev.      0.2665  0.2657
#>   weight sum         74      65
#>   precision      0.0073  0.0073
#> 
#> V37
#>   mean           0.3496  0.4166
#>   std. dev.       0.248  0.2484
#>   weight sum         74      65
#>   precision      0.0066  0.0066
#> 
#> V38
#>   mean           0.3521  0.3338
#>   std. dev.      0.2243  0.2244
#>   weight sum         74      65
#>   precision      0.0066  0.0066
#> 
#> V39
#>   mean           0.3438  0.3087
#>   std. dev.      0.1904  0.2269
#>   weight sum         74      65
#>   precision       0.007   0.007
#> 
#> V4
#>   mean           0.0691  0.0419
#>   std. dev.      0.0601  0.0302
#>   weight sum         74      65
#>   precision      0.0033  0.0033
#> 
#> V40
#>   mean           0.3146  0.3326
#>   std. dev.      0.1666  0.2093
#>   weight sum         74      65
#>   precision      0.0067  0.0067
#> 
#> V41
#>   mean           0.3046  0.3069
#>   std. dev.      0.1684  0.1797
#>   weight sum         74      65
#>   precision      0.0062  0.0062
#> 
#> V42
#>   mean           0.3024  0.2772
#>   std. dev.      0.1693  0.1679
#>   weight sum         74      65
#>   precision      0.0058  0.0058
#> 
#> V43
#>   mean           0.2762  0.2246
#>   std. dev.      0.1436  0.1351
#>   weight sum         74      65
#>   precision      0.0055  0.0055
#> 
#> V44
#>   mean           0.2489  0.1823
#>   std. dev.      0.1406  0.1122
#>   weight sum         74      65
#>   precision      0.0055  0.0055
#> 
#> V45
#>   mean           0.2493  0.1544
#>   std. dev.      0.1748  0.1043
#>   weight sum         74      65
#>   precision      0.0051  0.0051
#> 
#> V46
#>   mean           0.2088   0.128
#>   std. dev.      0.1519  0.1006
#>   weight sum         74      65
#>   precision      0.0054  0.0054
#> 
#> V47
#>   mean           0.1521  0.1014
#>   std. dev.      0.1009  0.0715
#>   weight sum         74      65
#>   precision      0.0039  0.0039
#> 
#> V48
#>   mean           0.1135  0.0759
#>   std. dev.      0.0698  0.0507
#>   weight sum         74      65
#>   precision      0.0024  0.0024
#> 
#> V49
#>   mean           0.0667  0.0433
#>   std. dev.      0.0374  0.0336
#>   weight sum         74      65
#>   precision      0.0015  0.0015
#> 
#> V5
#>   mean           0.0903   0.059
#>   std. dev.       0.063  0.0412
#>   weight sum         74      65
#>   precision       0.003   0.003
#> 
#> V50
#>   mean           0.0241   0.018
#>   std. dev.      0.0139  0.0135
#>   weight sum         74      65
#>   precision      0.0007  0.0007
#> 
#> V51
#>   mean           0.0194   0.012
#>   std. dev.      0.0155  0.0081
#>   weight sum         74      65
#>   precision      0.0009  0.0009
#> 
#> V52
#>   mean           0.0168  0.0102
#>   std. dev.      0.0118  0.0063
#>   weight sum         74      65
#>   precision      0.0007  0.0007
#> 
#> V53
#>   mean           0.0111  0.0095
#>   std. dev.      0.0076  0.0058
#>   weight sum         74      65
#>   precision      0.0004  0.0004
#> 
#> V54
#>   mean           0.0121  0.0095
#>   std. dev.       0.009  0.0051
#>   weight sum         74      65
#>   precision      0.0003  0.0003
#> 
#> V55
#>   mean           0.0097  0.0089
#>   std. dev.      0.0082  0.0056
#>   weight sum         74      65
#>   precision      0.0004  0.0004
#> 
#> V56
#>   mean           0.0093  0.0077
#>   std. dev.      0.0058   0.005
#>   weight sum         74      65
#>   precision      0.0003  0.0003
#> 
#> V57
#>   mean           0.0076  0.0082
#>   std. dev.      0.0049  0.0061
#>   weight sum         74      65
#>   precision      0.0003  0.0003
#> 
#> V58
#>   mean           0.0095  0.0068
#>   std. dev.      0.0064  0.0048
#>   weight sum         74      65
#>   precision      0.0003  0.0003
#> 
#> V59
#>   mean            0.009  0.0065
#>   std. dev.      0.0074  0.0043
#>   weight sum         74      65
#>   precision      0.0004  0.0004
#> 
#> V6
#>   mean           0.1194  0.0887
#>   std. dev.      0.0563  0.0533
#>   weight sum         74      65
#>   precision       0.002   0.002
#> 
#> V60
#>   mean           0.0072  0.0059
#>   std. dev.      0.0063  0.0036
#>   weight sum         74      65
#>   precision      0.0005  0.0005
#> 
#> V7
#>   mean           0.1339  0.1073
#>   std. dev.      0.0615  0.0516
#>   weight sum         74      65
#>   precision      0.0025  0.0025
#> 
#> V8
#>   mean           0.1602  0.1102
#>   std. dev.      0.0985  0.0747
#>   weight sum         74      65
#>   precision      0.0034  0.0034
#> 
#> V9
#>   mean           0.2243  0.1236
#>   std. dev.      0.1379  0.0893
#>   weight sum         74      65
#>   precision       0.005   0.005
#> 
#> 


# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> classif.ce 
#>  0.3913043
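
# For probabilistic output, the same workflow applies with predict_type = "prob".
# A brief sketch; classif.acc is shown only as an alternative to the default classif.ce.
learner = mlr3::lrn("classif.naive_bayes_weka", predict_type = "prob")
learner$train(task, row_ids = ids$train)
predictions = learner$predict(task, row_ids = ids$test)
head(predictions$prob)                       # per-class posterior probabilities
predictions$score(mlr3::msr("classif.acc"))  # accuracy instead of classification error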