Naive Bayes Classifier Using Estimator Classes. Calls RWeka::make_Weka_classifier() from RWeka.

Custom mlr3 parameters

  • output_debug_info:

    • original id: output-debug-info

  • do_not_check_capabilities:

    • original id: do-not-check-capabilities

  • num_decimal_places:

    • original id: num-decimal-places

  • batch_size:

    • original id: batch-size

  • Reason for change: the ids of the control arguments listed above were changed because their original ids contain an irregular pattern (hyphens, which are not valid in mlr3 parameter ids, are replaced with underscores).
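As a sketch, the renamed arguments are addressed by their mlr3 ids (with underscores) when constructing the learner; the parameter values below are purely illustrative:

```r
library(mlr3)
library(mlr3extralearners)

# Use the mlr3 ids, not Weka's original hyphenated ids
learner = lrn("classif.naive_bayes_weka",
  num_decimal_places = 4,  # original Weka id: num-decimal-places
  batch_size = 50          # original Weka id: batch-size
)
```

The translation back to Weka's hyphenated option names happens internally when the learner is trained.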

Dictionary

This Learner can be instantiated via lrn():

lrn("classif.naive_bayes_weka")

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob”

  • Feature Types: “logical”, “integer”, “numeric”, “factor”, “ordered”

  • Required Packages: mlr3, RWeka

Parameters

Id                        | Type    | Default | Levels      | Range
--------------------------|---------|---------|-------------|---------------
subset                    | untyped | -       |             | -
na.action                 | untyped | -       |             | -
K                         | logical | FALSE   | TRUE, FALSE | -
D                         | logical | FALSE   | TRUE, FALSE | -
O                         | logical | FALSE   | TRUE, FALSE | -
output_debug_info         | logical | FALSE   | TRUE, FALSE | -
do_not_check_capabilities | logical | FALSE   | TRUE, FALSE | -
num_decimal_places        | integer | 2       |             | \([1, \infty)\)
batch_size                | integer | 100     |             | \([1, \infty)\)
options                   | untyped | NULL    |             | -
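The single-letter parameters mirror Weka's command-line flags for NaiveBayes: per Weka's documentation, K enables a kernel density estimator for numeric attributes, D enables supervised discretization of numeric attributes (K and D are mutually exclusive in Weka), and O displays the model in the old output format. A minimal sketch of setting one of them:

```r
library(mlr3)
library(mlr3extralearners)

learner = lrn("classif.naive_bayes_weka")
# Replace the default normal-distribution estimate for numeric
# attributes with a kernel density estimate
learner$param_set$set_values(K = TRUE)
learner$param_set$values
```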

References

John GH, Langley P (1995). “Estimating Continuous Distributions in Bayesian Classifiers.” In Eleventh Conference on Uncertainty in Artificial Intelligence, 338-345.

Author

damirpolat

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifNaiveBayesWeka

Methods


Method new()

Creates a new instance of this R6 class.

Usage

LearnerClassifNaiveBayesWeka$new()


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifNaiveBayesWeka$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

# Define the Learner
learner = mlr3::lrn("classif.naive_bayes_weka")
print(learner)
#> 
#> ── <LearnerClassifNaiveBayesWeka> (classif.naive_bayes_weka): Naive Bayes ──────
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3 and RWeka
#> • Predict Types: [response] and prob
#> • Feature Types: logical, integer, numeric, factor, and ordered
#> • Encapsulation: none (fallback: -)
#> • Properties: missings, multiclass, and twoclass
#> • Other settings: use_weights = 'error'

# Define a Task
task = mlr3::tsk("sonar")

# Create train and test set
ids = mlr3::partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

print(learner$model)
#> Naive Bayes Classifier
#> 
#>                  Class
#> Attribute            M       R
#>                 (0.52)  (0.48)
#> ===============================
#> V1
#>   mean           0.0361  0.0239
#>   std. dev.      0.0263  0.0142
#>   weight sum         73      66
#>   precision       0.001   0.001
#> 
#> V10
#>   mean           0.2573  0.1632
#>   std. dev.      0.1465  0.1154
#>   weight sum         73      66
#>   precision      0.0051  0.0051
#> 
#> V11
#>   mean           0.2909  0.1834
#>   std. dev.       0.134  0.1195
#>   weight sum         73      66
#>   precision      0.0052  0.0052
#> 
#> V12
#>   mean           0.2942  0.1975
#>   std. dev.      0.1268  0.1471
#>   weight sum         73      66
#>   precision       0.005   0.005
#> 
#> V13
#>   mean           0.3147   0.228
#>   std. dev.      0.1261  0.1466
#>   weight sum         73      66
#>   precision      0.0049  0.0049
#> 
#> V14
#>   mean           0.3248  0.2737
#>   std. dev.      0.1621  0.1773
#>   weight sum         73      66
#>   precision      0.0071  0.0071
#> 
#> V15
#>   mean           0.3291  0.3073
#>   std. dev.      0.1936  0.2337
#>   weight sum         73      66
#>   precision      0.0073  0.0073
#> 
#> V16
#>   mean           0.3736  0.3695
#>   std. dev.      0.2106  0.2689
#>   weight sum         73      66
#>   precision      0.0072  0.0072
#> 
#> V17
#>   mean             0.41  0.4161
#>   std. dev.      0.2436  0.2889
#>   weight sum         73      66
#>   precision      0.0071  0.0071
#> 
#> V18
#>   mean           0.4474  0.4448
#>   std. dev.      0.2583   0.265
#>   weight sum         73      66
#>   precision      0.0068  0.0068
#> 
#> V19
#>   mean           0.5314  0.4656
#>   std. dev.      0.2533  0.2507
#>   weight sum         73      66
#>   precision      0.0069  0.0069
#> 
#> V2
#>   mean           0.0506  0.0322
#>   std. dev.      0.0401  0.0261
#>   weight sum         73      66
#>   precision      0.0019  0.0019
#> 
#> V20
#>   mean           0.6095  0.5008
#>   std. dev.      0.2552   0.239
#>   weight sum         73      66
#>   precision      0.0065  0.0065
#> 
#> V21
#>   mean           0.6632  0.5474
#>   std. dev.      0.2533  0.2377
#>   weight sum         73      66
#>   precision      0.0072  0.0072
#> 
#> V22
#>   mean           0.6696  0.6008
#>   std. dev.      0.2489  0.2566
#>   weight sum         73      66
#>   precision      0.0073  0.0073
#> 
#> V23
#>   mean           0.6628  0.6527
#>   std. dev.      0.2631  0.2411
#>   weight sum         73      66
#>   precision      0.0071  0.0071
#> 
#> V24
#>   mean           0.6666  0.6795
#>   std. dev.      0.2497  0.2316
#>   weight sum         73      66
#>   precision      0.0074  0.0074
#> 
#> V25
#>   mean           0.6623   0.695
#>   std. dev.      0.2304  0.2389
#>   weight sum         73      66
#>   precision      0.0074  0.0074
#> 
#> V26
#>   mean           0.6766   0.709
#>   std. dev.      0.2304  0.2345
#>   weight sum         73      66
#>   precision      0.0069  0.0069
#> 
#> V27
#>   mean           0.6819  0.6977
#>   std. dev.      0.2641  0.2184
#>   weight sum         73      66
#>   precision      0.0075  0.0075
#> 
#> V28
#>   mean           0.6906  0.6745
#>   std. dev.      0.2623  0.2008
#>   weight sum         73      66
#>   precision      0.0072  0.0072
#> 
#> V29
#>   mean           0.6398  0.6397
#>   std. dev.      0.2505  0.2195
#>   weight sum         73      66
#>   precision      0.0072  0.0072
#> 
#> V3
#>   mean           0.0563  0.0369
#>   std. dev.      0.0488  0.0315
#>   weight sum         73      66
#>   precision      0.0023  0.0023
#> 
#> V30
#>   mean            0.577  0.6012
#>   std. dev.       0.229  0.2227
#>   weight sum         73      66
#>   precision       0.007   0.007
#> 
#> V31
#>   mean           0.4821  0.5487
#>   std. dev.      0.2368  0.1977
#>   weight sum         73      66
#>   precision      0.0063  0.0063
#> 
#> V32
#>   mean            0.425  0.4518
#>   std. dev.      0.2188  0.2127
#>   weight sum         73      66
#>   precision      0.0063  0.0063
#> 
#> V33
#>   mean           0.4018  0.4487
#>   std. dev.      0.1898  0.2197
#>   weight sum         73      66
#>   precision       0.007   0.007
#> 
#> V34
#>   mean             0.39  0.4526
#>   std. dev.      0.1969  0.2674
#>   weight sum         73      66
#>   precision      0.0069  0.0069
#> 
#> V35
#>   mean            0.375  0.4677
#>   std. dev.      0.2448  0.2746
#>   weight sum         73      66
#>   precision      0.0071  0.0071
#> 
#> V36
#>   mean            0.351  0.4795
#>   std. dev.      0.2552   0.266
#>   weight sum         73      66
#>   precision      0.0071  0.0071
#> 
#> V37
#>   mean            0.339   0.428
#>   std. dev.      0.2366  0.2345
#>   weight sum         73      66
#>   precision      0.0067  0.0067
#> 
#> V38
#>   mean           0.3491   0.339
#>   std. dev.      0.2079  0.2067
#>   weight sum         73      66
#>   precision       0.007   0.007
#> 
#> V39
#>   mean           0.3543   0.302
#>   std. dev.      0.1871  0.1997
#>   weight sum         73      66
#>   precision      0.0062  0.0062
#> 
#> V4
#>   mean           0.0698  0.0414
#>   std. dev.      0.0617  0.0339
#>   weight sum         73      66
#>   precision      0.0034  0.0034
#> 
#> V40
#>   mean            0.307  0.3256
#>   std. dev.       0.169  0.1751
#>   weight sum         73      66
#>   precision      0.0064  0.0064
#> 
#> V41
#>   mean           0.3064  0.2924
#>   std. dev.      0.1701  0.1679
#>   weight sum         73      66
#>   precision      0.0054  0.0054
#> 
#> V42
#>   mean           0.3197  0.2581
#>   std. dev.      0.1793  0.1636
#>   weight sum         73      66
#>   precision      0.0059  0.0059
#> 
#> V43
#>   mean             0.29  0.2187
#>   std. dev.      0.1473  0.1386
#>   weight sum         73      66
#>   precision      0.0054  0.0054
#> 
#> V44
#>   mean           0.2542  0.1872
#>   std. dev.      0.1457  0.1165
#>   weight sum         73      66
#>   precision      0.0056  0.0056
#> 
#> V45
#>   mean           0.2627  0.1475
#>   std. dev.      0.1797  0.0928
#>   weight sum         73      66
#>   precision      0.0052  0.0052
#> 
#> V46
#>   mean           0.2212  0.1152
#>   std. dev.      0.1679   0.083
#>   weight sum         73      66
#>   precision      0.0053  0.0053
#> 
#> V47
#>   mean           0.1578  0.0974
#>   std. dev.      0.1059  0.0587
#>   weight sum         73      66
#>   precision       0.004   0.004
#> 
#> V48
#>   mean           0.1169  0.0719
#>   std. dev.      0.0734  0.0444
#>   weight sum         73      66
#>   precision      0.0024  0.0024
#> 
#> V49
#>   mean           0.0678  0.0384
#>   std. dev.      0.0389  0.0274
#>   weight sum         73      66
#>   precision      0.0013  0.0013
#> 
#> V5
#>   mean             0.09  0.0652
#>   std. dev.      0.0665  0.0501
#>   weight sum         73      66
#>   precision      0.0031  0.0031
#> 
#> V50
#>   mean           0.0245  0.0171
#>   std. dev.      0.0156  0.0107
#>   weight sum         73      66
#>   precision      0.0007  0.0007
#> 
#> V51
#>   mean           0.0209  0.0122
#>   std. dev.      0.0153  0.0082
#>   weight sum         73      66
#>   precision      0.0008  0.0008
#> 
#> V52
#>   mean           0.0171  0.0104
#>   std. dev.      0.0123  0.0072
#>   weight sum         73      66
#>   precision      0.0006  0.0006
#> 
#> V53
#>   mean           0.0125  0.0101
#>   std. dev.      0.0084  0.0064
#>   weight sum         73      66
#>   precision      0.0004  0.0004
#> 
#> V54
#>   mean           0.0128  0.0092
#>   std. dev.      0.0087  0.0051
#>   weight sum         73      66
#>   precision      0.0003  0.0003
#> 
#> V55
#>   mean           0.0104  0.0079
#>   std. dev.      0.0085  0.0052
#>   weight sum         73      66
#>   precision      0.0004  0.0004
#> 
#> V56
#>   mean           0.0093  0.0078
#>   std. dev.      0.0068  0.0051
#>   weight sum         73      66
#>   precision      0.0004  0.0004
#> 
#> V57
#>   mean           0.0086  0.0083
#>   std. dev.      0.0063   0.006
#>   weight sum         73      66
#>   precision      0.0004  0.0004
#> 
#> V58
#>   mean           0.0101  0.0069
#>   std. dev.      0.0081  0.0048
#>   weight sum         73      66
#>   precision      0.0004  0.0004
#> 
#> V59
#>   mean           0.0096   0.007
#>   std. dev.      0.0075  0.0048
#>   weight sum         73      66
#>   precision      0.0004  0.0004
#> 
#> V6
#>   mean           0.1151  0.1019
#>   std. dev.      0.0599  0.0707
#>   weight sum         73      66
#>   precision      0.0028  0.0028
#> 
#> V60
#>   mean           0.0075  0.0061
#>   std. dev.      0.0069  0.0039
#>   weight sum         73      66
#>   precision      0.0005  0.0005
#> 
#> V7
#>   mean            0.128  0.1232
#>   std. dev.      0.0618   0.068
#>   weight sum         73      66
#>   precision      0.0027  0.0027
#> 
#> V8
#>   mean           0.1493  0.1303
#>   std. dev.       0.088  0.0852
#>   weight sum         73      66
#>   precision      0.0033  0.0033
#> 
#> V9
#>   mean            0.218  0.1466
#>   std. dev.      0.1288  0.1051
#>   weight sum         73      66
#>   precision      0.0049  0.0049
#> 
#> 


# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
#> classif.ce 
#>  0.3478261
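Since the learner also supports the "prob" predict type, class probabilities can be obtained as well. A sketch continuing the example above (scores will vary with the random partition):

```r
# Request posterior class probabilities instead of hard labels only
learner = mlr3::lrn("classif.naive_bayes_weka", predict_type = "prob")
learner$train(task, row_ids = ids$train)
predictions = learner$predict(task, row_ids = ids$test)

head(predictions$prob)                    # per-class probabilities
predictions$score(mlr3::msr("classif.auc"))  # probability-based measure
```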