Naive Bayes classifier using estimator classes. Calls RWeka::make_Weka_classifier() from package RWeka.

Custom mlr3 parameters

  • output_debug_info:

    • original id: output-debug-info

  • do_not_check_capabilities:

    • original id: do-not-check-capabilities

  • num_decimal_places:

    • original id: num-decimal-places

  • batch_size:

    • original id: batch-size

  • Reason for change: the ids of these control arguments were changed because their original ids contain dashes, an irregular pattern for mlr3 parameter names.
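
The renamed control arguments are set with their mlr3 ids (underscores), not Weka's original dashed ids. A minimal sketch, assuming mlr3 and mlr3extralearners are loaded:

```r
library(mlr3)
library(mlr3extralearners)

# Pass hyperparameters via their mlr3 ids directly to lrn()
learner = lrn("classif.naive_bayes_weka",
  output_debug_info = FALSE,
  num_decimal_places = 4
)
```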

Dictionary

This Learner can be instantiated via lrn():

lrn("classif.naive_bayes_weka")
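
Equivalently, the learner can be retrieved from the mlr_learners dictionary:

```r
library(mlr3)
library(mlr3extralearners)

learner = mlr_learners$get("classif.naive_bayes_weka")
```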

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob”

  • Feature Types: “logical”, “integer”, “numeric”, “factor”, “ordered”

  • Required Packages: mlr3, RWeka

Parameters

Id                         Type     Default  Levels       Range
subset                     untyped  -        -            -
na.action                  untyped  -        -            -
K                          logical  FALSE    TRUE, FALSE  -
D                          logical  FALSE    TRUE, FALSE  -
O                          logical  FALSE    TRUE, FALSE  -
output_debug_info          logical  FALSE    TRUE, FALSE  -
do_not_check_capabilities  logical  FALSE    TRUE, FALSE  -
num_decimal_places         integer  2        -            [1, ∞)
batch_size                 integer  100      -            [1, ∞)
options                    untyped  NULL     -            -
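
Hyperparameters can also be changed after construction. A sketch; the comment on K reflects Weka's documented NaiveBayes options and is an assumption about this wrapper:

```r
library(mlr3)
library(mlr3extralearners)

learner = lrn("classif.naive_bayes_weka")
# In Weka's NaiveBayes, -K replaces the Gaussian estimator for numeric
# attributes with a kernel density estimator
learner$param_set$set_values(K = TRUE)
```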

References

John GH, Langley P (1995). “Estimating Continuous Distributions in Bayesian Classifiers.” In Eleventh Conference on Uncertainty in Artificial Intelligence, 338-345.

Author

damirpolat

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifNaiveBayesWeka

Active bindings

marshaled

(logical(1))
Whether the learner has been marshaled.

Methods

Inherited methods


Method new()

Creates a new instance of this R6 class.

Usage

LearnerClassifNaiveBayesWeka$new()


Method marshal()

Marshal the learner's model.

Usage

LearnerClassifNaiveBayesWeka$marshal(...)

Arguments

...

(any)
Additional arguments passed to mlr3::marshal_model().


Method unmarshal()

Unmarshal the learner's model.

Usage

LearnerClassifNaiveBayesWeka$unmarshal(...)

Arguments

...

(any)
Additional arguments passed to mlr3::unmarshal_model().
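
Marshaling converts the wrapped Java model into a serializable form, which is needed when a trained learner is sent to another R process (e.g. during parallelized resampling). A minimal round trip, assuming the learner has been trained:

```r
library(mlr3)
library(mlr3extralearners)

learner = lrn("classif.naive_bayes_weka")
learner$train(tsk("sonar"))

learner$marshal()             # model becomes serializable
learner$marshaled             # active binding now reports TRUE
learner$unmarshal()           # restore the working model
learner$marshaled             # back to FALSE
```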


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifNaiveBayesWeka$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

# Define the Learner
learner = lrn("classif.naive_bayes_weka")
print(learner)
#> 
#> ── <LearnerClassifNaiveBayesWeka> (classif.naive_bayes_weka): Naive Bayes ──────
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3 and RWeka
#> • Predict Types: [response] and prob
#> • Feature Types: logical, integer, numeric, factor, and ordered
#> • Encapsulation: none (fallback: -)
#> • Properties: marshal, missings, multiclass, and twoclass
#> • Other settings: use_weights = 'error'

# Define a Task
task = tsk("sonar")

# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> Naive Bayes Classifier
#> 
#>                  Class
#> Attribute            M       R
#>                 (0.52)  (0.48)
#> ===============================
#> V1
#>   mean           0.0358  0.0235
#>   std. dev.      0.0291  0.0148
#>   weight sum         73      66
#>   precision      0.0011  0.0011
#> 
#> V10
#>   mean           0.2566  0.1633
#>   std. dev.      0.1385  0.1175
#>   weight sum         73      66
#>   precision      0.0051  0.0051
#> 
#> V11
#>   mean           0.2922  0.1784
#>   std. dev.      0.1302  0.1171
#>   weight sum         73      66
#>   precision      0.0052  0.0052
#> 
#> V12
#>   mean           0.3012  0.1943
#>   std. dev.      0.1275  0.1437
#>   weight sum         73      66
#>   precision       0.005   0.005
#> 
#> V13
#>   mean           0.3164  0.2319
#>   std. dev.      0.1307  0.1466
#>   weight sum         73      66
#>   precision      0.0051  0.0051
#> 
#> V14
#>   mean           0.3238  0.2706
#>   std. dev.      0.1604  0.1766
#>   weight sum         73      66
#>   precision      0.0071  0.0071
#> 
#> V15
#>   mean           0.3314  0.3115
#>   std. dev.      0.1937  0.2276
#>   weight sum         73      66
#>   precision      0.0073  0.0073
#> 
#> V16
#>   mean           0.3845  0.3764
#>   std. dev.      0.2167  0.2646
#>   weight sum         73      66
#>   precision      0.0072  0.0072
#> 
#> V17
#>   mean           0.4209  0.4113
#>   std. dev.      0.2436  0.2914
#>   weight sum         73      66
#>   precision      0.0071  0.0071
#> 
#> V18
#>   mean           0.4593   0.437
#>   std. dev.      0.2504   0.272
#>   weight sum         73      66
#>   precision       0.007   0.007
#> 
#> V19
#>   mean           0.5352  0.4566
#>   std. dev.      0.2502  0.2519
#>   weight sum         73      66
#>   precision      0.0069  0.0069
#> 
#> V2
#>   mean           0.0484  0.0294
#>   std. dev.      0.0415  0.0194
#>   weight sum         73      66
#>   precision      0.0019  0.0019
#> 
#> V20
#>   mean           0.6004  0.4807
#>   std. dev.      0.2657  0.2554
#>   weight sum         73      66
#>   precision      0.0067  0.0067
#> 
#> V21
#>   mean           0.6374  0.5256
#>   std. dev.      0.2668  0.2399
#>   weight sum         73      66
#>   precision       0.007   0.007
#> 
#> V22
#>   mean           0.6466  0.5444
#>   std. dev.      0.2542  0.2438
#>   weight sum         73      66
#>   precision      0.0072  0.0072
#> 
#> V23
#>   mean           0.6571  0.5879
#>   std. dev.      0.2613  0.2302
#>   weight sum         73      66
#>   precision       0.007   0.007
#> 
#> V24
#>   mean           0.6798  0.6454
#>   std. dev.      0.2488   0.229
#>   weight sum         73      66
#>   precision      0.0073  0.0073
#> 
#> V25
#>   mean           0.6804   0.672
#>   std. dev.      0.2276  0.2483
#>   weight sum         73      66
#>   precision      0.0073  0.0073
#> 
#> V26
#>   mean           0.7052  0.7021
#>   std. dev.      0.2297  0.2309
#>   weight sum         73      66
#>   precision      0.0064  0.0064
#> 
#> V27
#>   mean           0.7186  0.7061
#>   std. dev.      0.2644  0.2043
#>   weight sum         73      66
#>   precision      0.0073  0.0073
#> 
#> V28
#>   mean           0.7253  0.6878
#>   std. dev.       0.262  0.1893
#>   weight sum         73      66
#>   precision      0.0076  0.0076
#> 
#> V29
#>   mean           0.6629  0.6385
#>   std. dev.      0.2379  0.2358
#>   weight sum         73      66
#>   precision      0.0073  0.0073
#> 
#> V3
#>   mean           0.0525  0.0361
#>   std. dev.      0.0466  0.0254
#>   weight sum         73      66
#>   precision      0.0023  0.0023
#> 
#> V30
#>   mean           0.5813  0.5911
#>   std. dev.      0.2019  0.2447
#>   weight sum         73      66
#>   precision       0.007   0.007
#> 
#> V31
#>   mean           0.4905  0.5457
#>   std. dev.       0.217  0.2132
#>   weight sum         73      66
#>   precision      0.0061  0.0061
#> 
#> V32
#>   mean           0.4345  0.4682
#>   std. dev.      0.2082  0.2118
#>   weight sum         73      66
#>   precision      0.0065  0.0065
#> 
#> V33
#>   mean           0.4042  0.4648
#>   std. dev.      0.1743  0.2241
#>   weight sum         73      66
#>   precision      0.0068  0.0068
#> 
#> V34
#>   mean           0.3592  0.4769
#>   std. dev.      0.1878  0.2523
#>   weight sum         73      66
#>   precision      0.0068  0.0068
#> 
#> V35
#>   mean           0.3278  0.4911
#>   std. dev.      0.2425  0.2616
#>   weight sum         73      66
#>   precision      0.0071  0.0071
#> 
#> V36
#>   mean           0.3086  0.4871
#>   std. dev.      0.2552  0.2631
#>   weight sum         73      66
#>   precision      0.0072  0.0072
#> 
#> V37
#>   mean           0.3188  0.4335
#>   std. dev.       0.236  0.2329
#>   weight sum         73      66
#>   precision      0.0067  0.0067
#> 
#> V38
#>   mean           0.3291  0.3595
#>   std. dev.      0.2002  0.2182
#>   weight sum         73      66
#>   precision      0.0071  0.0071
#> 
#> V39
#>   mean           0.3328  0.3117
#>   std. dev.      0.1865  0.1991
#>   weight sum         73      66
#>   precision      0.0063  0.0063
#> 
#> V4
#>   mean           0.0666  0.0395
#>   std. dev.      0.0576  0.0301
#>   weight sum         73      66
#>   precision      0.0032  0.0032
#> 
#> V40
#>   mean           0.3025  0.3236
#>   std. dev.      0.1636  0.1818
#>   weight sum         73      66
#>   precision      0.0065  0.0065
#> 
#> V41
#>   mean           0.2938  0.2959
#>   std. dev.      0.1618  0.1764
#>   weight sum         73      66
#>   precision      0.0054  0.0054
#> 
#> V42
#>   mean           0.3014  0.2661
#>   std. dev.      0.1736  0.1676
#>   weight sum         73      66
#>   precision      0.0058  0.0058
#> 
#> V43
#>   mean            0.277  0.2255
#>   std. dev.      0.1489  0.1347
#>   weight sum         73      66
#>   precision      0.0057  0.0057
#> 
#> V44
#>   mean           0.2557   0.187
#>   std. dev.      0.1454  0.1185
#>   weight sum         73      66
#>   precision      0.0059  0.0059
#> 
#> V45
#>   mean           0.2556  0.1467
#>   std. dev.      0.1819  0.0945
#>   weight sum         73      66
#>   precision      0.0052  0.0052
#> 
#> V46
#>   mean           0.2107  0.1144
#>   std. dev.      0.1622   0.086
#>   weight sum         73      66
#>   precision      0.0054  0.0054
#> 
#> V47
#>   mean           0.1548  0.0877
#>   std. dev.      0.0986  0.0598
#>   weight sum         73      66
#>   precision       0.004   0.004
#> 
#> V48
#>   mean           0.1177  0.0677
#>   std. dev.      0.0671  0.0425
#>   weight sum         73      66
#>   precision      0.0025  0.0025
#> 
#> V49
#>   mean           0.0641   0.037
#>   std. dev.      0.0361   0.027
#>   weight sum         73      66
#>   precision      0.0012  0.0012
#> 
#> V5
#>   mean           0.0885   0.063
#>   std. dev.       0.059  0.0504
#>   weight sum         73      66
#>   precision       0.003   0.003
#> 
#> V50
#>   mean           0.0225  0.0157
#>   std. dev.      0.0139  0.0103
#>   weight sum         73      66
#>   precision      0.0008  0.0008
#> 
#> V51
#>   mean           0.0191  0.0114
#>   std. dev.      0.0127  0.0086
#>   weight sum         73      66
#>   precision      0.0008  0.0008
#> 
#> V52
#>   mean           0.0168  0.0098
#>   std. dev.      0.0117   0.007
#>   weight sum         73      66
#>   precision      0.0007  0.0007
#> 
#> V53
#>   mean           0.0123  0.0088
#>   std. dev.      0.0082  0.0055
#>   weight sum         73      66
#>   precision      0.0004  0.0004
#> 
#> V54
#>   mean           0.0114  0.0095
#>   std. dev.      0.0074  0.0052
#>   weight sum         73      66
#>   precision      0.0003  0.0003
#> 
#> V55
#>   mean           0.0101  0.0079
#>   std. dev.      0.0086  0.0049
#>   weight sum         73      66
#>   precision      0.0005  0.0005
#> 
#> V56
#>   mean           0.0091  0.0077
#>   std. dev.      0.0066   0.005
#>   weight sum         73      66
#>   precision      0.0004  0.0004
#> 
#> V57
#>   mean           0.0081  0.0082
#>   std. dev.      0.0062  0.0061
#>   weight sum         73      66
#>   precision      0.0004  0.0004
#> 
#> V58
#>   mean           0.0089   0.007
#>   std. dev.       0.008  0.0047
#>   weight sum         73      66
#>   precision      0.0005  0.0005
#> 
#> V59
#>   mean           0.0085  0.0074
#>   std. dev.       0.007  0.0054
#>   weight sum         73      66
#>   precision      0.0004  0.0004
#> 
#> V6
#>   mean           0.1123  0.1018
#>   std. dev.      0.0512  0.0702
#>   weight sum         73      66
#>   precision      0.0029  0.0029
#> 
#> V60
#>   mean           0.0068  0.0058
#>   std. dev.      0.0066  0.0036
#>   weight sum         73      66
#>   precision      0.0005  0.0005
#> 
#> V7
#>   mean           0.1292  0.1141
#>   std. dev.      0.0569  0.0674
#>   weight sum         73      66
#>   precision      0.0028  0.0028
#> 
#> V8
#>   mean           0.1474   0.117
#>   std. dev.      0.0769  0.0878
#>   weight sum         73      66
#>   precision      0.0031  0.0031
#> 
#> V9
#>   mean           0.2116  0.1424
#>   std. dev.      0.1161  0.1083
#>   weight sum         73      66
#>   precision      0.0049  0.0049
#> 
#> 


# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> classif.ce 
#>  0.3188406
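
Since the learner supports the "prob" predict type, posterior class probabilities can be requested as well. A sketch that reuses the task and partition from above:

```r
# Request class probabilities instead of hard labels
learner = lrn("classif.naive_bayes_weka", predict_type = "prob")
learner$train(task, row_ids = ids$train)
predictions = learner$predict(task, row_ids = ids$test)
head(predictions$prob)
```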