
Naive Bayes Classifier Using Estimator Classes. Calls RWeka::make_Weka_classifier() from package RWeka.
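For orientation, here is a minimal sketch of how the underlying RWeka interface can be built directly (the Weka class path weka/classifiers/bayes/NaiveBayes is an assumption for illustration, not quoted from this page):

library(RWeka)

# Build an R interface to Weka's NaiveBayes (class path assumed, not taken from this page)
NaiveBayesWeka = make_Weka_classifier("weka/classifiers/bayes/NaiveBayes")

# Fit on a toy data set and print the estimated class-conditional distributions
fit = NaiveBayesWeka(Species ~ ., data = iris)
print(fit)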

Custom mlr3 parameters

  • output_debug_info:

    • original id: output-debug-info

  • do_not_check_capabilities:

    • original id: do-not-check-capabilities

  • num_decimal_places:

    • original id: num-decimal-places

  • batch_size:

    • original id: batch-size

  • Reason for change: the ids of these control arguments were changed because their original Weka ids contain an irregular pattern (hyphens instead of underscores).
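A minimal usage sketch of the renamed ids (assuming mlr3 and mlr3extralearners are installed; the value 4 is arbitrary). Note that the mlr3 id num_decimal_places is used, not Weka's num-decimal-places:

library(mlr3)
library(mlr3extralearners)

# The mlr3 id "num_decimal_places" maps to Weka's "num-decimal-places"
learner = lrn("classif.naive_bayes_weka", num_decimal_places = 4)

# The stored values are a named list, e.g. list(num_decimal_places = 4)
learner$param_set$values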

Dictionary

This Learner can be instantiated via lrn():

lrn("classif.naive_bayes_weka")

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob”

  • Feature Types: “logical”, “integer”, “numeric”, “factor”, “ordered”

  • Required Packages: mlr3, RWeka

Parameters

Id                         Type     Default  Levels       Range
subset                     untyped  -        -            -
na.action                  untyped  -        -            -
K                          logical  FALSE    TRUE, FALSE  -
D                          logical  FALSE    TRUE, FALSE  -
O                          logical  FALSE    TRUE, FALSE  -
output_debug_info          logical  FALSE    TRUE, FALSE  -
do_not_check_capabilities  logical  FALSE    TRUE, FALSE  -
num_decimal_places         integer  2        -            \([1, \infty)\)
batch_size                 integer  100      -            \([1, \infty)\)
options                    untyped  NULL     -            -
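As a further sketch, the Weka control flags can be toggled like any other mlr3 hyperparameter (hedged: the reading of K as Weka's kernel density estimator option comes from Weka's NaiveBayes documentation, not from this page):

# Enable the K flag (in Weka's NaiveBayes this switches numeric attributes to a
# kernel density estimator; see the upstream Weka documentation)
learner = lrn("classif.naive_bayes_weka", K = TRUE)

# Inspect the full parameter set, including types, defaults, and ranges
learner$param_set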

References

John GH, Langley P (1995). “Estimating Continuous Distributions in Bayesian Classifiers.” In Eleventh Conference on Uncertainty in Artificial Intelligence, 338-345.

Author

damirpolat

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifNaiveBayesWeka

Active bindings

marshaled

(logical(1))
Whether the learner has been marshaled.

Methods

Inherited methods


Method new()

Creates a new instance of this R6 class.

Usage

LearnerClassifNaiveBayesWeka$new()


Method marshal()

Marshal the learner's model.

Usage

LearnerClassifNaiveBayesWeka$marshal(...)

Arguments

...

(any)
Additional arguments passed to mlr3::marshal_model().


Method unmarshal()

Unmarshal the learner's model.

Usage

LearnerClassifNaiveBayesWeka$unmarshal(...)

Arguments

...

(any)
Additional arguments passed to mlr3::unmarshal_model().
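A minimal sketch of marshaling a trained learner, e.g. before serializing it or sending it to another R process, using the methods and the marshaled binding documented above:

library(mlr3)
library(mlr3extralearners)

learner = lrn("classif.naive_bayes_weka")
learner$train(tsk("sonar"))

# Convert the fitted Weka model into a serializable representation
learner$marshal()
learner$marshaled  # should now be TRUE

# Restore the model before predicting again
learner$unmarshal()
learner$marshaled  # back to FALSE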


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifNaiveBayesWeka$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

# Define the Learner
learner = lrn("classif.naive_bayes_weka")
print(learner)
#> 
#> ── <LearnerClassifNaiveBayesWeka> (classif.naive_bayes_weka): Naive Bayes ──────
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3 and RWeka
#> • Predict Types: [response] and prob
#> • Feature Types: logical, integer, numeric, factor, and ordered
#> • Encapsulation: none (fallback: -)
#> • Properties: marshal, missings, multiclass, and twoclass
#> • Other settings: use_weights = 'error'

# Define a Task
task = tsk("sonar")

# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> Naive Bayes Classifier
#> 
#>                  Class
#> Attribute            M       R
#>                 (0.53)  (0.47)
#> ===============================
#> V1
#>   mean           0.0364  0.0227
#>   std. dev.      0.0291  0.0154
#>   weight sum         74      65
#>   precision      0.0011  0.0011
#> 
#> V10
#>   mean           0.2524  0.1537
#>   std. dev.      0.1356  0.1163
#>   weight sum         74      65
#>   precision      0.0047  0.0047
#> 
#> V11
#>   mean           0.2916   0.171
#>   std. dev.      0.1131  0.1192
#>   weight sum         74      65
#>   precision      0.0047  0.0047
#> 
#> V12
#>   mean           0.3026  0.1929
#>   std. dev.      0.1192  0.1348
#>   weight sum         74      65
#>   precision      0.0049  0.0049
#> 
#> V13
#>   mean           0.3183  0.2276
#>   std. dev.      0.1313  0.1357
#>   weight sum         74      65
#>   precision      0.0051  0.0051
#> 
#> V14
#>   mean           0.3283   0.272
#>   std. dev.      0.1666  0.1594
#>   weight sum         74      65
#>   precision      0.0071  0.0071
#> 
#> V15
#>   mean            0.332   0.305
#>   std. dev.      0.1978  0.2168
#>   weight sum         74      65
#>   precision      0.0074  0.0074
#> 
#> V16
#>   mean           0.3805  0.3678
#>   std. dev.      0.2053  0.2486
#>   weight sum         74      65
#>   precision      0.0072  0.0072
#> 
#> V17
#>   mean           0.4086  0.4178
#>   std. dev.      0.2326  0.2853
#>   weight sum         74      65
#>   precision      0.0071  0.0071
#> 
#> V18
#>   mean            0.446  0.4618
#>   std. dev.      0.2446  0.2616
#>   weight sum         74      65
#>   precision      0.0071  0.0071
#> 
#> V19
#>   mean           0.5273  0.4795
#>   std. dev.      0.2465  0.2649
#>   weight sum         74      65
#>   precision      0.0069  0.0069
#> 
#> V2
#>   mean           0.0441  0.0305
#>   std. dev.      0.0392  0.0252
#>   weight sum         74      65
#>   precision      0.0019  0.0019
#> 
#> V20
#>   mean           0.6206  0.5241
#>   std. dev.      0.2521  0.2674
#>   weight sum         74      65
#>   precision      0.0068  0.0068
#> 
#> V21
#>   mean           0.6763  0.5665
#>   std. dev.      0.2569  0.2625
#>   weight sum         74      65
#>   precision      0.0072  0.0072
#> 
#> V22
#>   mean           0.6846  0.5786
#>   std. dev.       0.252  0.2688
#>   weight sum         74      65
#>   precision      0.0074  0.0074
#> 
#> V23
#>   mean           0.6959  0.6172
#>   std. dev.      0.2464   0.243
#>   weight sum         74      65
#>   precision       0.007   0.007
#> 
#> V24
#>   mean             0.71  0.6614
#>   std. dev.      0.2316  0.2358
#>   weight sum         74      65
#>   precision      0.0073  0.0073
#> 
#> V25
#>   mean           0.7091  0.6829
#>   std. dev.      0.2307    0.26
#>   weight sum         74      65
#>   precision      0.0074  0.0074
#> 
#> V26
#>   mean           0.7327   0.692
#>   std. dev.      0.2242  0.2465
#>   weight sum         74      65
#>   precision      0.0064  0.0064
#> 
#> V27
#>   mean           0.7323  0.6914
#>   std. dev.      0.2522  0.2238
#>   weight sum         74      65
#>   precision      0.0072  0.0072
#> 
#> V28
#>   mean           0.7208  0.6834
#>   std. dev.       0.239  0.2056
#>   weight sum         74      65
#>   precision      0.0072  0.0072
#> 
#> V29
#>   mean           0.6412  0.6262
#>   std. dev.      0.2368  0.2323
#>   weight sum         74      65
#>   precision      0.0075  0.0075
#> 
#> V3
#>   mean           0.0492   0.035
#>   std. dev.      0.0454  0.0288
#>   weight sum         74      65
#>   precision      0.0024  0.0024
#> 
#> V30
#>   mean           0.5752  0.5627
#>   std. dev.      0.2094  0.2262
#>   weight sum         74      65
#>   precision      0.0069  0.0069
#> 
#> V31
#>   mean           0.4748  0.5116
#>   std. dev.      0.2312  0.1985
#>   weight sum         74      65
#>   precision      0.0063  0.0063
#> 
#> V32
#>   mean           0.4264  0.4305
#>   std. dev.      0.2201  0.2065
#>   weight sum         74      65
#>   precision      0.0065  0.0065
#> 
#> V33
#>   mean           0.3935  0.4234
#>   std. dev.      0.2005  0.2091
#>   weight sum         74      65
#>   precision       0.007   0.007
#> 
#> V34
#>   mean            0.376  0.4229
#>   std. dev.      0.2115  0.2515
#>   weight sum         74      65
#>   precision      0.0068  0.0068
#> 
#> V35
#>   mean           0.3583  0.4126
#>   std. dev.      0.2612  0.2596
#>   weight sum         74      65
#>   precision      0.0072  0.0072
#> 
#> V36
#>   mean           0.3586  0.4184
#>   std. dev.       0.264  0.2569
#>   weight sum         74      65
#>   precision      0.0072  0.0072
#> 
#> V37
#>   mean           0.3426  0.3846
#>   std. dev.      0.2448  0.2444
#>   weight sum         74      65
#>   precision      0.0065  0.0065
#> 
#> V38
#>   mean           0.3539  0.3278
#>   std. dev.      0.2129  0.2007
#>   weight sum         74      65
#>   precision       0.007   0.007
#> 
#> V39
#>   mean           0.3606  0.2972
#>   std. dev.      0.1825  0.1816
#>   weight sum         74      65
#>   precision      0.0061  0.0061
#> 
#> V4
#>   mean            0.066  0.0431
#>   std. dev.      0.0611  0.0317
#>   weight sum         74      65
#>   precision      0.0032  0.0032
#> 
#> V40
#>   mean           0.3189  0.2974
#>   std. dev.      0.1585  0.1544
#>   weight sum         74      65
#>   precision      0.0058  0.0058
#> 
#> V41
#>   mean           0.2963  0.2664
#>   std. dev.      0.1631  0.1423
#>   weight sum         74      65
#>   precision      0.0053  0.0053
#> 
#> V42
#>   mean           0.3088   0.242
#>   std. dev.      0.1674  0.1457
#>   weight sum         74      65
#>   precision      0.0059  0.0059
#> 
#> V43
#>   mean           0.2891  0.1958
#>   std. dev.       0.143  0.1111
#>   weight sum         74      65
#>   precision      0.0055  0.0055
#> 
#> V44
#>   mean           0.2643  0.1558
#>   std. dev.      0.1515  0.0745
#>   weight sum         74      65
#>   precision      0.0045  0.0045
#> 
#> V45
#>   mean            0.265   0.116
#>   std. dev.      0.1732  0.0596
#>   weight sum         74      65
#>   precision      0.0052  0.0052
#> 
#> V46
#>   mean           0.2075  0.0931
#>   std. dev.      0.1484  0.0588
#>   weight sum         74      65
#>   precision      0.0055  0.0055
#> 
#> V47
#>   mean           0.1442  0.0839
#>   std. dev.      0.0909  0.0493
#>   weight sum         74      65
#>   precision      0.0041  0.0041
#> 
#> V48
#>   mean           0.1088  0.0636
#>   std. dev.       0.065  0.0393
#>   weight sum         74      65
#>   precision      0.0025  0.0025
#> 
#> V49
#>   mean           0.0628  0.0361
#>   std. dev.      0.0348  0.0264
#>   weight sum         74      65
#>   precision      0.0012  0.0012
#> 
#> V5
#>   mean           0.0938  0.0592
#>   std. dev.      0.0658  0.0481
#>   weight sum         74      65
#>   precision       0.003   0.003
#> 
#> V50
#>   mean           0.0232  0.0166
#>   std. dev.      0.0144  0.0113
#>   weight sum         74      65
#>   precision      0.0008  0.0008
#> 
#> V51
#>   mean           0.0186  0.0114
#>   std. dev.      0.0139  0.0075
#>   weight sum         74      65
#>   precision      0.0009  0.0009
#> 
#> V52
#>   mean           0.0158  0.0104
#>   std. dev.      0.0112   0.007
#>   weight sum         74      65
#>   precision      0.0007  0.0007
#> 
#> V53
#>   mean           0.0118  0.0097
#>   std. dev.       0.008  0.0061
#>   weight sum         74      65
#>   precision      0.0004  0.0004
#> 
#> V54
#>   mean           0.0125  0.0095
#>   std. dev.      0.0088  0.0056
#>   weight sum         74      65
#>   precision      0.0003  0.0003
#> 
#> V55
#>   mean           0.0105  0.0086
#>   std. dev.      0.0093  0.0054
#>   weight sum         74      65
#>   precision      0.0005  0.0005
#> 
#> V56
#>   mean           0.0089   0.007
#>   std. dev.      0.0069  0.0044
#>   weight sum         74      65
#>   precision      0.0004  0.0004
#> 
#> V57
#>   mean           0.0081   0.008
#>   std. dev.       0.006  0.0061
#>   weight sum         74      65
#>   precision      0.0004  0.0004
#> 
#> V58
#>   mean           0.0093  0.0067
#>   std. dev.      0.0076   0.005
#>   weight sum         74      65
#>   precision      0.0005  0.0005
#> 
#> V59
#>   mean            0.009   0.007
#>   std. dev.      0.0077  0.0048
#>   weight sum         74      65
#>   precision      0.0004  0.0004
#> 
#> V6
#>   mean           0.1176  0.0909
#>   std. dev.      0.0536  0.0648
#>   weight sum         74      65
#>   precision      0.0028  0.0028
#> 
#> V60
#>   mean           0.0074  0.0058
#>   std. dev.      0.0063  0.0031
#>   weight sum         74      65
#>   precision      0.0005  0.0005
#> 
#> V7
#>   mean           0.1309    0.11
#>   std. dev.      0.0574  0.0671
#>   weight sum         74      65
#>   precision      0.0028  0.0028
#> 
#> V8
#>   mean           0.1529  0.1158
#>   std. dev.      0.0956  0.0839
#>   weight sum         74      65
#>   precision      0.0033  0.0033
#> 
#> V9
#>   mean           0.2173  0.1369
#>   std. dev.      0.1307  0.1009
#>   weight sum         74      65
#>   precision      0.0049  0.0049
#> 
#> 


# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> classif.ce 
#>  0.4057971
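
As a follow-up sketch, the predictions can be scored with other standard mlr3 measures, and probability predictions can be requested via the "prob" predict type listed above (the measure ids classif.acc and classif.auc are standard mlr3 measures, assumed here rather than quoted from this page):

# Accuracy instead of classification error
predictions$score(msr("classif.acc"))

# Switch to probability predictions, retrain, and evaluate AUC on the test rows
learner$predict_type = "prob"
learner$train(task, row_ids = ids$train)
learner$predict(task, row_ids = ids$test)$score(msr("classif.auc"))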