
Naive Bayes classifier using estimator classes. Calls RWeka::make_Weka_classifier() from package RWeka.

Custom mlr3 parameters

  • output_debug_info:

    • original id: output-debug-info

  • do_not_check_capabilities:

    • original id: do-not-check-capabilities

  • num_decimal_places:

    • original id: num-decimal-places

  • batch_size:

    • original id: batch-size

  • Reason for change: the ids of the control arguments listed above were changed because their original ids contain dashes, an irregular pattern for mlr3 parameter ids (see the sketch after this list).
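
For illustration, a minimal sketch of setting one of the renamed control arguments via lrn(); the learner forwards the value to Weka under its original id (assumes mlr3 and mlr3extralearners are attached):

# mlr3-style id "num_decimal_places" maps to Weka's "num-decimal-places"
learner = lrn("classif.naive_bayes_weka", num_decimal_places = 4)
learner$param_set$values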

Dictionary

This Learner can be instantiated via lrn():

lrn("classif.naive_bayes_weka")

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob”

  • Feature Types: “logical”, “integer”, “numeric”, “factor”, “ordered”

  • Required Packages: mlr3, RWeka

Parameters

Id                         Type     Default  Levels       Range
subset                     untyped  -                     -
na.action                  untyped  -                     -
K                          logical  FALSE    TRUE, FALSE  -
D                          logical  FALSE    TRUE, FALSE  -
O                          logical  FALSE    TRUE, FALSE  -
output_debug_info          logical  FALSE    TRUE, FALSE  -
do_not_check_capabilities  logical  FALSE    TRUE, FALSE  -
num_decimal_places         integer  2                     [1, ∞)
batch_size                 integer  100                   [1, ∞)
options                    untyped  NULL                  -
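
As an illustration, a short sketch of configuring one of the Weka flags; in Weka's NaiveBayes, K selects a kernel density estimator for numeric attributes (check the RWeka/Weka documentation to confirm the flag semantics):

learner = lrn("classif.naive_bayes_weka", K = TRUE)
learner$param_set$values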

References

John GH, Langley P (1995). “Estimating Continuous Distributions in Bayesian Classifiers.” In Eleventh Conference on Uncertainty in Artificial Intelligence, 338-345.

See also

Author

damirpolat

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifNaiveBayesWeka

Active bindings

marshaled

(logical(1))
Whether the learner has been marshaled.

Methods

Inherited methods


Method new()

Creates a new instance of this R6 class.
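
Usage (a sketch, following the zero-argument constructor pattern of other mlr3 learners):

LearnerClassifNaiveBayesWeka$new()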


Method marshal()

Marshal the learner's model.

Usage

LearnerClassifNaiveBayesWeka$marshal(...)

Arguments

...

(any)
Additional arguments passed to mlr3::marshal_model().


Method unmarshal()

Unmarshal the learner's model.

Usage

LearnerClassifNaiveBayesWeka$unmarshal(...)

Arguments

...

(any)
Additional arguments passed to mlr3::unmarshal_model().
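
For illustration, a sketch of a marshal/unmarshal round trip on a trained learner (useful, e.g., before serializing the model to another R session); tsk("sonar") is used here only as a convenient example task:

learner = lrn("classif.naive_bayes_weka")
learner$train(tsk("sonar"))
learner$marshal()    # convert the model into a serializable form
learner$marshaled    # check the marshaling state
learner$unmarshal()  # restore the original model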


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifNaiveBayesWeka$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.
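
A brief sketch of cloning a learner; deep = TRUE also clones R6 objects held inside the learner, so the copy is fully independent:

learner = lrn("classif.naive_bayes_weka")
learner_copy = learner$clone(deep = TRUE)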

Examples

# Define the Learner
learner = lrn("classif.naive_bayes_weka")
print(learner)
#> 
#> ── <LearnerClassifNaiveBayesWeka> (classif.naive_bayes_weka): Naive Bayes ──────
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3 and RWeka
#> • Predict Types: [response] and prob
#> • Feature Types: logical, integer, numeric, factor, and ordered
#> • Encapsulation: none (fallback: -)
#> • Properties: marshal, missings, multiclass, and twoclass
#> • Other settings: use_weights = 'error'

# Define a Task
task = tsk("sonar")

# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> Naive Bayes Classifier
#> 
#>                  Class
#> Attribute            M       R
#>                  (0.5)   (0.5)
#> ===============================
#> V1
#>   mean           0.0378  0.0229
#>   std. dev.      0.0273  0.0162
#>   weight sum         70      69
#>   precision      0.0011  0.0011
#> 
#> V10
#>   mean           0.2543  0.1564
#>   std. dev.      0.1368  0.1043
#>   weight sum         70      69
#>   precision      0.0051  0.0051
#> 
#> V11
#>   mean           0.2932  0.1741
#>   std. dev.      0.1415  0.1085
#>   weight sum         70      69
#>   precision      0.0051  0.0051
#> 
#> V12
#>   mean           0.3078  0.1907
#>   std. dev.      0.1339  0.1294
#>   weight sum         70      69
#>   precision      0.0046  0.0046
#> 
#> V13
#>   mean           0.3231  0.2242
#>   std. dev.      0.1327  0.1372
#>   weight sum         70      69
#>   precision      0.0052  0.0052
#> 
#> V14
#>   mean           0.3262  0.2589
#>   std. dev.      0.1688  0.1641
#>   weight sum         70      69
#>   precision      0.0071  0.0071
#> 
#> V15
#>   mean           0.3382  0.2905
#>   std. dev.      0.1957  0.2069
#>   weight sum         70      69
#>   precision      0.0067  0.0067
#> 
#> V16
#>   mean            0.389  0.3549
#>   std. dev.      0.2188  0.2494
#>   weight sum         70      69
#>   precision       0.007   0.007
#> 
#> V17
#>   mean           0.4252  0.4031
#>   std. dev.      0.2489  0.2896
#>   weight sum         70      69
#>   precision      0.0072  0.0072
#> 
#> V18
#>   mean           0.4596  0.4388
#>   std. dev.      0.2548  0.2672
#>   weight sum         70      69
#>   precision      0.0067  0.0067
#> 
#> V19
#>   mean           0.5473  0.4351
#>   std. dev.       0.246  0.2487
#>   weight sum         70      69
#>   precision      0.0069  0.0069
#> 
#> V2
#>   mean           0.0477  0.0305
#>   std. dev.      0.0395  0.0252
#>   weight sum         70      69
#>   precision      0.0019  0.0019
#> 
#> V20
#>   mean           0.6177   0.466
#>   std. dev.      0.2467  0.2406
#>   weight sum         70      69
#>   precision      0.0069  0.0069
#> 
#> V21
#>   mean           0.6661  0.5184
#>   std. dev.      0.2346  0.2409
#>   weight sum         70      69
#>   precision      0.0071  0.0071
#> 
#> V22
#>   mean           0.6915  0.5571
#>   std. dev.      0.2252  0.2537
#>   weight sum         70      69
#>   precision      0.0069  0.0069
#> 
#> V23
#>   mean           0.7027  0.6177
#>   std. dev.      0.2333  0.2468
#>   weight sum         70      69
#>   precision      0.0071  0.0071
#> 
#> V24
#>   mean           0.7052  0.6678
#>   std. dev.      0.2348  0.2267
#>   weight sum         70      69
#>   precision      0.0071  0.0071
#> 
#> V25
#>   mean           0.6942  0.6811
#>   std. dev.      0.2388  0.2378
#>   weight sum         70      69
#>   precision      0.0073  0.0073
#> 
#> V26
#>   mean           0.7044   0.694
#>   std. dev.      0.2402    0.23
#>   weight sum         70      69
#>   precision      0.0065  0.0065
#> 
#> V27
#>   mean             0.71   0.674
#>   std. dev.      0.2595  0.2115
#>   weight sum         70      69
#>   precision      0.0071  0.0071
#> 
#> V28
#>   mean           0.7089  0.6587
#>   std. dev.      0.2607  0.2126
#>   weight sum         70      69
#>   precision      0.0073  0.0073
#> 
#> V29
#>   mean           0.6429  0.6348
#>   std. dev.      0.2491  0.2475
#>   weight sum         70      69
#>   precision      0.0076  0.0076
#> 
#> V3
#>   mean           0.0524  0.0359
#>   std. dev.      0.0506  0.0272
#>   weight sum         70      69
#>   precision      0.0024  0.0024
#> 
#> V30
#>   mean           0.5682  0.5847
#>   std. dev.       0.209  0.2303
#>   weight sum         70      69
#>   precision      0.0069  0.0069
#> 
#> V31
#>   mean           0.4861   0.537
#>   std. dev.       0.213  0.1952
#>   weight sum         70      69
#>   precision      0.0063  0.0063
#> 
#> V32
#>   mean           0.4271  0.4738
#>   std. dev.      0.2076  0.2133
#>   weight sum         70      69
#>   precision      0.0064  0.0064
#> 
#> V33
#>   mean           0.3975  0.4664
#>   std. dev.      0.1883  0.2237
#>   weight sum         70      69
#>   precision      0.0068  0.0068
#> 
#> V34
#>   mean           0.3567  0.4524
#>   std. dev.      0.1905  0.2538
#>   weight sum         70      69
#>   precision      0.0068  0.0068
#> 
#> V35
#>   mean           0.3127  0.4569
#>   std. dev.      0.2349  0.2622
#>   weight sum         70      69
#>   precision      0.0072  0.0072
#> 
#> V36
#>   mean           0.2957   0.464
#>   std. dev.      0.2292  0.2627
#>   weight sum         70      69
#>   precision      0.0073  0.0073
#> 
#> V37
#>   mean           0.2945  0.4112
#>   std. dev.      0.2204  0.2528
#>   weight sum         70      69
#>   precision      0.0067  0.0067
#> 
#> V38
#>   mean           0.3081  0.3385
#>   std. dev.      0.1879  0.2338
#>   weight sum         70      69
#>   precision       0.007   0.007
#> 
#> V39
#>   mean            0.315  0.2959
#>   std. dev.      0.1759  0.2171
#>   weight sum         70      69
#>   precision      0.0069  0.0069
#> 
#> V4
#>   mean           0.0683    0.04
#>   std. dev.      0.0622  0.0302
#>   weight sum         70      69
#>   precision      0.0034  0.0034
#> 
#> V40
#>   mean           0.2872  0.3107
#>   std. dev.       0.158  0.1882
#>   weight sum         70      69
#>   precision      0.0066  0.0066
#> 
#> V41
#>   mean           0.2742  0.2851
#>   std. dev.      0.1621  0.1766
#>   weight sum         70      69
#>   precision      0.0064  0.0064
#> 
#> V42
#>   mean           0.2894  0.2563
#>   std. dev.      0.1584  0.1697
#>   weight sum         70      69
#>   precision      0.0056  0.0056
#> 
#> V43
#>   mean           0.2803  0.2188
#>   std. dev.      0.1422  0.1243
#>   weight sum         70      69
#>   precision      0.0055  0.0055
#> 
#> V44
#>   mean           0.2556  0.1776
#>   std. dev.      0.1474  0.0898
#>   weight sum         70      69
#>   precision      0.0043  0.0043
#> 
#> V45
#>   mean            0.242  0.1406
#>   std. dev.      0.1743  0.0918
#>   weight sum         70      69
#>   precision      0.0052  0.0052
#> 
#> V46
#>   mean           0.1935  0.1203
#>   std. dev.      0.1565  0.0943
#>   weight sum         70      69
#>   precision      0.0055  0.0055
#> 
#> V47
#>   mean           0.1507  0.0944
#>   std. dev.      0.1024  0.0704
#>   weight sum         70      69
#>   precision      0.0041  0.0041
#> 
#> V48
#>   mean           0.1128  0.0679
#>   std. dev.      0.0688  0.0503
#>   weight sum         70      69
#>   precision      0.0024  0.0024
#> 
#> V49
#>   mean           0.0635  0.0389
#>   std. dev.      0.0372  0.0324
#>   weight sum         70      69
#>   precision      0.0015  0.0015
#> 
#> V5
#>   mean           0.0933  0.0577
#>   std. dev.      0.0686  0.0483
#>   weight sum         70      69
#>   precision       0.003   0.003
#> 
#> V50
#>   mean           0.0221  0.0182
#>   std. dev.      0.0146  0.0135
#>   weight sum         70      69
#>   precision      0.0008  0.0008
#> 
#> V51
#>   mean           0.0193   0.013
#>   std. dev.       0.015  0.0091
#>   weight sum         70      69
#>   precision      0.0008  0.0008
#> 
#> V52
#>   mean           0.0164  0.0106
#>   std. dev.      0.0119  0.0076
#>   weight sum         70      69
#>   precision      0.0007  0.0007
#> 
#> V53
#>   mean           0.0118  0.0093
#>   std. dev.      0.0076  0.0055
#>   weight sum         70      69
#>   precision      0.0003  0.0003
#> 
#> V54
#>   mean           0.0118   0.009
#>   std. dev.      0.0082  0.0051
#>   weight sum         70      69
#>   precision      0.0003  0.0003
#> 
#> V55
#>   mean             0.01  0.0081
#>   std. dev.       0.009  0.0048
#>   weight sum         70      69
#>   precision      0.0004  0.0004
#> 
#> V56
#>   mean           0.0084  0.0073
#>   std. dev.      0.0063  0.0045
#>   weight sum         70      69
#>   precision      0.0004  0.0004
#> 
#> V57
#>   mean           0.0079  0.0078
#>   std. dev.      0.0061  0.0054
#>   weight sum         70      69
#>   precision      0.0004  0.0004
#> 
#> V58
#>   mean           0.0094  0.0064
#>   std. dev.      0.0077  0.0047
#>   weight sum         70      69
#>   precision      0.0005  0.0005
#> 
#> V59
#>   mean           0.0093  0.0069
#>   std. dev.      0.0072  0.0042
#>   weight sum         70      69
#>   precision      0.0004  0.0004
#> 
#> V6
#>   mean           0.1139  0.0951
#>   std. dev.      0.0547  0.0679
#>   weight sum         70      69
#>   precision      0.0028  0.0028
#> 
#> V60
#>   mean           0.0069  0.0061
#>   std. dev.      0.0064  0.0034
#>   weight sum         70      69
#>   precision      0.0005  0.0005
#> 
#> V7
#>   mean            0.126  0.1155
#>   std. dev.      0.0602  0.0683
#>   weight sum         70      69
#>   precision      0.0027  0.0027
#> 
#> V8
#>   mean           0.1439  0.1186
#>   std. dev.      0.0828  0.0808
#>   weight sum         70      69
#>   precision      0.0034  0.0034
#> 
#> V9
#>   mean           0.2073  0.1354
#>   std. dev.      0.1139  0.0952
#>   weight sum         70      69
#>   precision      0.0049  0.0049
#> 
#> 


# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> classif.ce 
#>  0.3043478
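
Since "prob" is among the supported predict types, a short sketch of requesting class probabilities and scoring with accuracy instead of the default classification error (continuing the objects from the example above):

learner$predict_type = "prob"
learner$train(task, row_ids = ids$train)
predictions = learner$predict(task, row_ids = ids$test)
predictions$score(msr("classif.acc"))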