
Naive Bayes classifier using estimator classes. Calls RWeka::make_Weka_classifier() from package RWeka.

Custom mlr3 parameters

  • output_debug_info:

    • original id: output-debug-info

  • do_not_check_capabilities:

    • original id: do-not-check-capabilities

  • num_decimal_places:

    • original id: num-decimal-places

  • batch_size:

    • original id: batch-size

  • Reason for change: the ids of these control arguments were changed because their original ids contain hyphens, which do not fit mlr3's parameter naming pattern (see the example below).
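
For example, the learner can be constructed with the renamed parameters passed directly to lrn(); the values below are purely illustrative, and each renamed id is mapped back to the original hyphenated Weka control argument internally:

learner = mlr3::lrn(
  "classif.naive_bayes_weka",
  num_decimal_places = 4,  # forwarded to Weka's num-decimal-places
  batch_size = 50          # forwarded to Weka's batch-size
)
learner$param_set$values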

Dictionary

This Learner can be instantiated via lrn():

lrn("classif.naive_bayes_weka")

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob”

  • Feature Types: “logical”, “integer”, “numeric”, “factor”, “ordered”

  • Required Packages: mlr3, RWeka

Parameters

Id                         Type     Default  Levels       Range
subset                     untyped  -        -            -
na.action                  untyped  -        -            -
K                          logical  FALSE    TRUE, FALSE  -
D                          logical  FALSE    TRUE, FALSE  -
O                          logical  FALSE    TRUE, FALSE  -
output_debug_info          logical  FALSE    TRUE, FALSE  -
do_not_check_capabilities  logical  FALSE    TRUE, FALSE  -
num_decimal_places         integer  2        -            \([1, \infty)\)
batch_size                 integer  100      -            \([1, \infty)\)
options                    untyped  NULL     -            -
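
The single-letter flags keep their Weka meaning: in Weka's NaiveBayes, K uses a kernel density estimator for numeric attributes, D applies supervised discretization to numeric attributes, and O displays the model in the old output format. A minimal sketch of setting a flag after construction (the chosen value is only illustrative):

learner = mlr3::lrn("classif.naive_bayes_weka")
learner$param_set$values$K = TRUE  # kernel density estimation instead of a normal distribution
learner$param_set$values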

References

John GH, Langley P (1995). “Estimating Continuous Distributions in Bayesian Classifiers.” In Eleventh Conference on Uncertainty in Artificial Intelligence, 338-345.

Author

damirpolat

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifNaiveBayesWeka

Methods

Method new()

Creates a new instance of this R6 class.


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifNaiveBayesWeka$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

# Define the Learner
learner = mlr3::lrn("classif.naive_bayes_weka")
print(learner)
#> 
#> ── <LearnerClassifNaiveBayesWeka> (classif.naive_bayes_weka): Naive Bayes ──────
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3 and RWeka
#> • Predict Types: [response] and prob
#> • Feature Types: logical, integer, numeric, factor, and ordered
#> • Encapsulation: none (fallback: -)
#> • Properties: missings, multiclass, and twoclass
#> • Other settings: use_weights = 'error'

# Define a Task
task = mlr3::tsk("sonar")

# Create train and test set
ids = mlr3::partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> Naive Bayes Classifier
#> 
#>                  Class
#> Attribute            M       R
#>                 (0.54)  (0.46)
#> ===============================
#> V1
#>   mean           0.0366  0.0228
#>   std. dev.      0.0264  0.0156
#>   weight sum         75      64
#>   precision      0.0011  0.0011
#> 
#> V10
#>   mean            0.252  0.1677
#>   std. dev.      0.1315  0.1257
#>   weight sum         75      64
#>   precision       0.005   0.005
#> 
#> V11
#>   mean           0.2907  0.1804
#>   std. dev.      0.1308  0.1198
#>   weight sum         75      64
#>   precision      0.0051  0.0051
#> 
#> V12
#>   mean           0.3103  0.1915
#>   std. dev.      0.1285  0.1349
#>   weight sum         75      64
#>   precision       0.005   0.005
#> 
#> V13
#>   mean           0.3301  0.2209
#>   std. dev.      0.1348  0.1324
#>   weight sum         75      64
#>   precision       0.005   0.005
#> 
#> V14
#>   mean            0.338  0.2629
#>   std. dev.      0.1663  0.1587
#>   weight sum         75      64
#>   precision      0.0073  0.0073
#> 
#> V15
#>   mean           0.3401  0.3081
#>   std. dev.      0.1924  0.2167
#>   weight sum         75      64
#>   precision      0.0066  0.0066
#> 
#> V16
#>   mean            0.385  0.3769
#>   std. dev.      0.2114  0.2519
#>   weight sum         75      64
#>   precision      0.0072  0.0072
#> 
#> V17
#>   mean           0.4211  0.4125
#>   std. dev.       0.253  0.2845
#>   weight sum         75      64
#>   precision      0.0071  0.0071
#> 
#> V18
#>   mean           0.4553  0.4377
#>   std. dev.      0.2695  0.2628
#>   weight sum         75      64
#>   precision       0.007   0.007
#> 
#> V19
#>   mean            0.534   0.455
#>   std. dev.      0.2679  0.2506
#>   weight sum         75      64
#>   precision      0.0069  0.0069
#> 
#> V2
#>   mean           0.0501   0.031
#>   std. dev.      0.0409  0.0261
#>   weight sum         75      64
#>   precision      0.0018  0.0018
#> 
#> V20
#>   mean           0.6064  0.5009
#>   std. dev.      0.2615  0.2559
#>   weight sum         75      64
#>   precision      0.0068  0.0068
#> 
#> V21
#>   mean           0.6445   0.555
#>   std. dev.      0.2658  0.2457
#>   weight sum         75      64
#>   precision      0.0071  0.0071
#> 
#> V22
#>   mean           0.6589  0.5938
#>   std. dev.      0.2486  0.2585
#>   weight sum         75      64
#>   precision      0.0072  0.0072
#> 
#> V23
#>   mean           0.6632  0.6182
#>   std. dev.      0.2408  0.2465
#>   weight sum         75      64
#>   precision       0.007   0.007
#> 
#> V24
#>   mean            0.674  0.6504
#>   std. dev.      0.2374  0.2326
#>   weight sum         75      64
#>   precision      0.0073  0.0073
#> 
#> V25
#>   mean           0.6699  0.6668
#>   std. dev.      0.2287  0.2438
#>   weight sum         75      64
#>   precision      0.0075  0.0075
#> 
#> V26
#>   mean           0.7023  0.6795
#>   std. dev.      0.2287  0.2327
#>   weight sum         75      64
#>   precision      0.0071  0.0071
#> 
#> V27
#>   mean           0.7174  0.6753
#>   std. dev.      0.2528  0.2126
#>   weight sum         75      64
#>   precision      0.0075  0.0075
#> 
#> V28
#>   mean           0.7061   0.679
#>   std. dev.      0.2529  0.1919
#>   weight sum         75      64
#>   precision      0.0069  0.0069
#> 
#> V29
#>   mean           0.6399  0.6431
#>   std. dev.      0.2562  0.2297
#>   weight sum         75      64
#>   precision      0.0074  0.0074
#> 
#> V3
#>   mean            0.058   0.035
#>   std. dev.        0.05  0.0312
#>   weight sum         75      64
#>   precision      0.0023  0.0023
#> 
#> V30
#>   mean           0.5771  0.5916
#>   std. dev.      0.2324  0.2344
#>   weight sum         75      64
#>   precision      0.0069  0.0069
#> 
#> V31
#>   mean           0.4996  0.5325
#>   std. dev.      0.2401  0.1981
#>   weight sum         75      64
#>   precision      0.0067  0.0067
#> 
#> V32
#>   mean           0.4356  0.4376
#>   std. dev.      0.2328  0.2172
#>   weight sum         75      64
#>   precision      0.0064  0.0064
#> 
#> V33
#>   mean           0.3969  0.4221
#>   std. dev.      0.1899  0.2184
#>   weight sum         75      64
#>   precision      0.0067  0.0067
#> 
#> V34
#>   mean           0.3628  0.4506
#>   std. dev.      0.1888  0.2702
#>   weight sum         75      64
#>   precision      0.0068  0.0068
#> 
#> V35
#>   mean           0.3307  0.4895
#>   std. dev.      0.2319  0.2668
#>   weight sum         75      64
#>   precision      0.0072  0.0072
#> 
#> V36
#>   mean           0.3109  0.4911
#>   std. dev.      0.2297  0.2574
#>   weight sum         75      64
#>   precision      0.0071  0.0071
#> 
#> V37
#>   mean           0.3116  0.4229
#>   std. dev.      0.2096  0.2306
#>   weight sum         75      64
#>   precision      0.0066  0.0066
#> 
#> V38
#>   mean           0.3294   0.334
#>   std. dev.      0.1814  0.2018
#>   weight sum         75      64
#>   precision       0.007   0.007
#> 
#> V39
#>   mean           0.3362  0.2935
#>   std. dev.      0.1707  0.2035
#>   weight sum         75      64
#>   precision      0.0061  0.0061
#> 
#> V4
#>   mean           0.0737  0.0425
#>   std. dev.      0.0606  0.0329
#>   weight sum         75      64
#>   precision      0.0034  0.0034
#> 
#> V40
#>   mean           0.2935  0.3191
#>   std. dev.      0.1569  0.1863
#>   weight sum         75      64
#>   precision      0.0064  0.0064
#> 
#> V41
#>   mean           0.2966   0.291
#>   std. dev.      0.1728   0.164
#>   weight sum         75      64
#>   precision      0.0054  0.0054
#> 
#> V42
#>   mean           0.3305  0.2479
#>   std. dev.      0.1767  0.1523
#>   weight sum         75      64
#>   precision      0.0059  0.0059
#> 
#> V43
#>   mean           0.2995  0.2122
#>   std. dev.       0.149  0.1238
#>   weight sum         75      64
#>   precision      0.0055  0.0055
#> 
#> V44
#>   mean           0.2618  0.1767
#>   std. dev.      0.1581  0.1171
#>   weight sum         75      64
#>   precision      0.0056  0.0056
#> 
#> V45
#>   mean           0.2612  0.1417
#>   std. dev.      0.1877  0.0912
#>   weight sum         75      64
#>   precision      0.0051  0.0051
#> 
#> V46
#>   mean           0.2122  0.1167
#>   std. dev.      0.1697  0.0833
#>   weight sum         75      64
#>   precision      0.0054  0.0054
#> 
#> V47
#>   mean           0.1602  0.0978
#>   std. dev.      0.1069    0.06
#>   weight sum         75      64
#>   precision       0.004   0.004
#> 
#> V48
#>   mean           0.1219  0.0719
#>   std. dev.      0.0717  0.0451
#>   weight sum         75      64
#>   precision      0.0024  0.0024
#> 
#> V49
#>   mean           0.0677  0.0392
#>   std. dev.      0.0392   0.029
#>   weight sum         75      64
#>   precision      0.0014  0.0014
#> 
#> V5
#>   mean           0.0962  0.0646
#>   std. dev.      0.0656  0.0495
#>   weight sum         75      64
#>   precision       0.003   0.003
#> 
#> V50
#>   mean           0.0239  0.0188
#>   std. dev.       0.015  0.0115
#>   weight sum         75      64
#>   precision      0.0007  0.0007
#> 
#> V51
#>   mean           0.0217   0.013
#>   std. dev.      0.0152  0.0084
#>   weight sum         75      64
#>   precision      0.0009  0.0009
#> 
#> V52
#>   mean           0.0167  0.0108
#>   std. dev.      0.0121  0.0073
#>   weight sum         75      64
#>   precision      0.0006  0.0006
#> 
#> V53
#>   mean           0.0122  0.0101
#>   std. dev.      0.0079  0.0061
#>   weight sum         75      64
#>   precision      0.0004  0.0004
#> 
#> V54
#>   mean            0.013  0.0101
#>   std. dev.      0.0084  0.0055
#>   weight sum         75      64
#>   precision      0.0003  0.0003
#> 
#> V55
#>   mean           0.0101  0.0085
#>   std. dev.      0.0087  0.0049
#>   weight sum         75      64
#>   precision      0.0004  0.0004
#> 
#> V56
#>   mean           0.0093  0.0072
#>   std. dev.      0.0067  0.0051
#>   weight sum         75      64
#>   precision      0.0004  0.0004
#> 
#> V57
#>   mean           0.0081  0.0081
#>   std. dev.      0.0062   0.006
#>   weight sum         75      64
#>   precision      0.0004  0.0004
#> 
#> V58
#>   mean           0.0095  0.0069
#>   std. dev.      0.0083  0.0048
#>   weight sum         75      64
#>   precision      0.0005  0.0005
#> 
#> V59
#>   mean           0.0093  0.0075
#>   std. dev.      0.0075  0.0054
#>   weight sum         75      64
#>   precision      0.0004  0.0004
#> 
#> V6
#>   mean            0.116  0.0998
#>   std. dev.      0.0539  0.0666
#>   weight sum         75      64
#>   precision      0.0028  0.0028
#> 
#> V60
#>   mean           0.0073   0.006
#>   std. dev.      0.0067  0.0039
#>   weight sum         75      64
#>   precision      0.0005  0.0005
#> 
#> V7
#>   mean           0.1283  0.1173
#>   std. dev.      0.0577  0.0655
#>   weight sum         75      64
#>   precision      0.0027  0.0027
#> 
#> V8
#>   mean           0.1505  0.1237
#>   std. dev.      0.0807  0.0834
#>   weight sum         75      64
#>   precision      0.0033  0.0033
#> 
#> V9
#>   mean           0.2142   0.145
#>   std. dev.       0.117   0.109
#>   weight sum         75      64
#>   precision      0.0049  0.0049
#> 
#> 


# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> classif.ce 
#>  0.3333333
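

# The learner also supports the "prob" predict type. A minimal sketch
# continuing the example above; the chosen measure is only illustrative.
learner = mlr3::lrn("classif.naive_bayes_weka", predict_type = "prob")
learner$train(task, row_ids = ids$train)
predictions = learner$predict(task, row_ids = ids$test)
head(predictions$prob)                      # per-class probabilities
predictions$score(mlr3::msr("classif.auc")) # area under the ROC curve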