Classification Naive Bayes Learner From Weka
Source: R/learner_RWeka_classif_naive_bayes_weka.R
mlr_learners_classif.naive_bayes_weka.Rd
Naive Bayes Classifier Using Estimator Classes.
Calls RWeka::make_Weka_classifier() from package RWeka.
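The following is a minimal sketch of the underlying RWeka interface this learner wraps, assuming RWeka and a working Java installation are available; the exact internal wrapper used by the learner may differ.

library(RWeka)
# Build an R interface to Weka's NaiveBayes (it is not pre-registered in RWeka,
# hence make_Weka_classifier()).
NaiveBayesWeka = make_Weka_classifier("weka/classifiers/bayes/NaiveBayes")
# Fit on a toy data set; Weka_control() passes Weka options, e.g. -K for a
# kernel density estimator on numeric attributes.
fit = NaiveBayesWeka(Species ~ ., data = iris, control = Weka_control(K = TRUE))
print(fit)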
Custom mlr3 parameters
output_debug_info (original id: output-debug-info)
do_not_check_capabilities (original id: do-not-check-capabilities)
num_decimal_places (original id: num-decimal-places)
batch_size (original id: batch-size)
Reason for change: the ids of these control arguments were changed because their original ids contain dashes, an irregular pattern for mlr3 parameter ids (see the sketch below).
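As a hedged sketch (assuming mlr3 and mlr3extralearners are loaded), the renamed parameters are set like any other hyperparameter; the learner maps them back to Weka's original dashed option names when training.

library(mlr3)
library(mlr3extralearners)
# Set renamed control arguments via their mlr3 ids.
learner = lrn(
  "classif.naive_bayes_weka",
  output_debug_info = TRUE,
  num_decimal_places = 3,
  batch_size = 200
)
learner$param_set$values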
Parameters
Id | Type | Default | Levels | Range |
subset | untyped | - | - | - |
na.action | untyped | - | - | - |
K | logical | FALSE | TRUE, FALSE | - |
D | logical | FALSE | TRUE, FALSE | - |
O | logical | FALSE | TRUE, FALSE | - |
output_debug_info | logical | FALSE | TRUE, FALSE | - |
do_not_check_capabilities | logical | FALSE | TRUE, FALSE | - |
num_decimal_places | integer | 2 | - | \([1, \infty)\) |
batch_size | integer | 100 | - | \([1, \infty)\) |
options | untyped | NULL | - | - |
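In Weka's NaiveBayes, K enables a kernel density estimator for numeric attributes, D enables supervised discretization, and O prints the model in the old output format; this description comes from Weka's documentation, not from this page. A short sketch of constructing learners with these flags:

# Kernel density estimation instead of a per-feature normal distribution.
learner_kde = lrn("classif.naive_bayes_weka", K = TRUE)
# Supervised discretization of numeric features (Weka does not allow
# combining K and D).
learner_disc = lrn("classif.naive_bayes_weka", D = TRUE)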
References
John GH, Langley P (1995). “Estimating Continuous Distributions in Bayesian Classifiers.” In Eleventh Conference on Uncertainty in Artificial Intelligence, 338-345.
See also
as.data.table(mlr_learners) for a table of available Learners in the running session (depending on the loaded packages).
Chapter in the mlr3book: https://mlr3book.mlr-org.com/basics.html#learners
mlr3learners for a selection of recommended learners.
mlr3cluster for unsupervised clustering learners.
mlr3pipelines to combine learners with pre- and postprocessing steps.
mlr3tuning for tuning of hyperparameters, mlr3tuningspaces for established default tuning spaces.
Super classes
mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifNaiveBayesWeka
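A quick check of the documented inheritance chain on a constructed learner (a sketch; the expected class vector follows from the super classes listed above):

learner = lrn("classif.naive_bayes_weka")
class(learner)
# Expected, from most to least specific:
# "LearnerClassifNaiveBayesWeka" "LearnerClassif" "Learner" "R6"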
Methods
Inherited methods
mlr3::Learner$base_learner()
mlr3::Learner$configure()
mlr3::Learner$encapsulate()
mlr3::Learner$format()
mlr3::Learner$help()
mlr3::Learner$predict()
mlr3::Learner$predict_newdata()
mlr3::Learner$print()
mlr3::Learner$reset()
mlr3::Learner$selected_features()
mlr3::Learner$train()
mlr3::LearnerClassif$predict_newdata_fast()
Method marshal()
Marshal the learner's model.
Arguments
...
(any)
Additional arguments passed to mlr3::marshal_model().
Method unmarshal()
Unmarshal the learner's model.
Arguments
...
(any)
Additional arguments passed to mlr3::unmarshal_model().
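A short sketch of using these methods, assuming a trained learner; marshaling converts the fitted Weka (Java) model into a serializable form, e.g. before saving it or sending it to parallel workers.

learner = lrn("classif.naive_bayes_weka")
learner$train(tsk("sonar"))
# Convert the model into a serializable (marshaled) representation ...
learner$marshal()
# ... and restore the original model object before predicting again.
learner$unmarshal()
predictions = learner$predict(tsk("sonar"))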
Examples
# Define the Learner
learner = lrn("classif.naive_bayes_weka")
print(learner)
#>
#> ── <LearnerClassifNaiveBayesWeka> (classif.naive_bayes_weka): Naive Bayes ──────
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3 and RWeka
#> • Predict Types: [response] and prob
#> • Feature Types: logical, integer, numeric, factor, and ordered
#> • Encapsulation: none (fallback: -)
#> • Properties: marshal, missings, multiclass, and twoclass
#> • Other settings: use_weights = 'error'
# Define a Task
task = tsk("sonar")
# Create train and test set
ids = partition(task)
# Train the learner on the training ids
learner$train(task, row_ids = ids$train)
print(learner$model)
#> Naive Bayes Classifier
#>
#> Class
#> Attribute M R
#> (0.52) (0.48)
#> ===============================
#> V1
#> mean 0.0382 0.0232
#> std. dev. 0.0292 0.0142
#> weight sum 72 67
#> precision 0.0011 0.0011
#>
#> V10
#> mean 0.2629 0.1668
#> std. dev. 0.1472 0.1078
#> weight sum 72 67
#> precision 0.005 0.005
#>
#> V11
#> mean 0.3122 0.1814
#> std. dev. 0.1335 0.1036
#> weight sum 72 67
#> precision 0.0052 0.0052
#>
#> V12
#> mean 0.3239 0.1909
#> std. dev. 0.1241 0.1277
#> weight sum 72 67
#> precision 0.0046 0.0046
#>
#> V13
#> mean 0.3387 0.2278
#> std. dev. 0.1318 0.1362
#> weight sum 72 67
#> precision 0.0051 0.0051
#>
#> V14
#> mean 0.3351 0.2773
#> std. dev. 0.1595 0.1687
#> weight sum 72 67
#> precision 0.0071 0.0071
#>
#> V15
#> mean 0.3327 0.316
#> std. dev. 0.1869 0.22
#> weight sum 72 67
#> precision 0.0072 0.0072
#>
#> V16
#> mean 0.3789 0.3925
#> std. dev. 0.2071 0.2506
#> weight sum 72 67
#> precision 0.0071 0.0071
#>
#> V17
#> mean 0.412 0.446
#> std. dev. 0.2357 0.2861
#> weight sum 72 67
#> precision 0.007 0.007
#>
#> V18
#> mean 0.4502 0.4679
#> std. dev. 0.2565 0.2686
#> weight sum 72 67
#> precision 0.0071 0.0071
#>
#> V19
#> mean 0.5368 0.4735
#> std. dev. 0.2514 0.2509
#> weight sum 72 67
#> precision 0.0069 0.0069
#>
#> V2
#> mean 0.0497 0.032
#> std. dev. 0.0418 0.026
#> weight sum 72 67
#> precision 0.0019 0.0019
#>
#> V20
#> mean 0.6298 0.5025
#> std. dev. 0.2423 0.2631
#> weight sum 72 67
#> precision 0.0069 0.0069
#>
#> V21
#> mean 0.6786 0.5522
#> std. dev. 0.2435 0.2479
#> weight sum 72 67
#> precision 0.0072 0.0072
#>
#> V22
#> mean 0.6801 0.5812
#> std. dev. 0.243 0.2448
#> weight sum 72 67
#> precision 0.0067 0.0067
#>
#> V23
#> mean 0.6813 0.603
#> std. dev. 0.2648 0.2458
#> weight sum 72 67
#> precision 0.0072 0.0072
#>
#> V24
#> mean 0.6915 0.6407
#> std. dev. 0.2592 0.2336
#> weight sum 72 67
#> precision 0.0073 0.0073
#>
#> V25
#> mean 0.6921 0.6678
#> std. dev. 0.2396 0.2369
#> weight sum 72 67
#> precision 0.0073 0.0073
#>
#> V26
#> mean 0.7185 0.6971
#> std. dev. 0.2354 0.216
#> weight sum 72 67
#> precision 0.007 0.007
#>
#> V27
#> mean 0.7187 0.6826
#> std. dev. 0.2684 0.2149
#> weight sum 72 67
#> precision 0.0074 0.0074
#>
#> V28
#> mean 0.7163 0.6528
#> std. dev. 0.2536 0.2044
#> weight sum 72 67
#> precision 0.0075 0.0075
#>
#> V29
#> mean 0.6366 0.6068
#> std. dev. 0.2445 0.2444
#> weight sum 72 67
#> precision 0.0074 0.0074
#>
#> V3
#> mean 0.0556 0.0384
#> std. dev. 0.0494 0.0313
#> weight sum 72 67
#> precision 0.0024 0.0024
#>
#> V30
#> mean 0.5625 0.5503
#> std. dev. 0.2042 0.2465
#> weight sum 72 67
#> precision 0.0069 0.0069
#>
#> V31
#> mean 0.4809 0.4999
#> std. dev. 0.2149 0.211
#> weight sum 72 67
#> precision 0.0063 0.0063
#>
#> V32
#> mean 0.4298 0.4278
#> std. dev. 0.2226 0.2206
#> weight sum 72 67
#> precision 0.0064 0.0064
#>
#> V33
#> mean 0.3984 0.4236
#> std. dev. 0.211 0.2174
#> weight sum 72 67
#> precision 0.0069 0.0069
#>
#> V34
#> mean 0.3765 0.4504
#> std. dev. 0.2164 0.2403
#> weight sum 72 67
#> precision 0.0068 0.0068
#>
#> V35
#> mean 0.3447 0.4706
#> std. dev. 0.2552 0.2474
#> weight sum 72 67
#> precision 0.0072 0.0072
#>
#> V36
#> mean 0.328 0.4867
#> std. dev. 0.2373 0.2543
#> weight sum 72 67
#> precision 0.0072 0.0072
#>
#> V37
#> mean 0.3187 0.4491
#> std. dev. 0.2241 0.2501
#> weight sum 72 67
#> precision 0.0067 0.0067
#>
#> V38
#> mean 0.3378 0.371
#> std. dev. 0.1886 0.2249
#> weight sum 72 67
#> precision 0.007 0.007
#>
#> V39
#> mean 0.3488 0.3181
#> std. dev. 0.1827 0.2168
#> weight sum 72 67
#> precision 0.0068 0.0068
#>
#> V4
#> mean 0.0701 0.0447
#> std. dev. 0.0616 0.0337
#> weight sum 72 67
#> precision 0.0032 0.0032
#>
#> V40
#> mean 0.3082 0.3236
#> std. dev. 0.163 0.1907
#> weight sum 72 67
#> precision 0.0067 0.0067
#>
#> V41
#> mean 0.2959 0.2963
#> std. dev. 0.1678 0.1881
#> weight sum 72 67
#> precision 0.0063 0.0063
#>
#> V42
#> mean 0.32 0.2572
#> std. dev. 0.177 0.1776
#> weight sum 72 67
#> precision 0.0059 0.0059
#>
#> V43
#> mean 0.2962 0.2179
#> std. dev. 0.1479 0.1414
#> weight sum 72 67
#> precision 0.0056 0.0056
#>
#> V44
#> mean 0.2653 0.1797
#> std. dev. 0.151 0.1194
#> weight sum 72 67
#> precision 0.0058 0.0058
#>
#> V45
#> mean 0.2668 0.1507
#> std. dev. 0.1855 0.0996
#> weight sum 72 67
#> precision 0.0052 0.0052
#>
#> V46
#> mean 0.2113 0.1256
#> std. dev. 0.1647 0.0989
#> weight sum 72 67
#> precision 0.0054 0.0054
#>
#> V47
#> mean 0.1528 0.0979
#> std. dev. 0.1025 0.0713
#> weight sum 72 67
#> precision 0.0041 0.0041
#>
#> V48
#> mean 0.1139 0.0724
#> std. dev. 0.0704 0.0518
#> weight sum 72 67
#> precision 0.0024 0.0024
#>
#> V49
#> mean 0.0662 0.0411
#> std. dev. 0.0388 0.0335
#> weight sum 72 67
#> precision 0.0015 0.0015
#>
#> V5
#> mean 0.0904 0.0669
#> std. dev. 0.0641 0.0521
#> weight sum 72 67
#> precision 0.003 0.003
#>
#> V50
#> mean 0.0243 0.0192
#> std. dev. 0.0152 0.0126
#> weight sum 72 67
#> precision 0.0008 0.0008
#>
#> V51
#> mean 0.0199 0.0129
#> std. dev. 0.0154 0.0094
#> weight sum 72 67
#> precision 0.0009 0.0009
#>
#> V52
#> mean 0.0172 0.0107
#> std. dev. 0.0125 0.0076
#> weight sum 72 67
#> precision 0.0007 0.0007
#>
#> V53
#> mean 0.0125 0.0093
#> std. dev. 0.0083 0.006
#> weight sum 72 67
#> precision 0.0004 0.0004
#>
#> V54
#> mean 0.0134 0.0095
#> std. dev. 0.0086 0.0057
#> weight sum 72 67
#> precision 0.0003 0.0003
#>
#> V55
#> mean 0.0114 0.0085
#> std. dev. 0.0091 0.0051
#> weight sum 72 67
#> precision 0.0004 0.0004
#>
#> V56
#> mean 0.0089 0.0071
#> std. dev. 0.0063 0.0047
#> weight sum 72 67
#> precision 0.0004 0.0004
#>
#> V57
#> mean 0.0081 0.0079
#> std. dev. 0.0063 0.0049
#> weight sum 72 67
#> precision 0.0004 0.0004
#>
#> V58
#> mean 0.0103 0.007
#> std. dev. 0.0085 0.005
#> weight sum 72 67
#> precision 0.0004 0.0004
#>
#> V59
#> mean 0.009 0.0073
#> std. dev. 0.0067 0.0054
#> weight sum 72 67
#> precision 0.0004 0.0004
#>
#> V6
#> mean 0.1124 0.1023
#> std. dev. 0.0534 0.071
#> weight sum 72 67
#> precision 0.0028 0.0028
#>
#> V60
#> mean 0.0063 0.0062
#> std. dev. 0.0046 0.0037
#> weight sum 72 67
#> precision 0.0003 0.0003
#>
#> V7
#> mean 0.1249 0.1165
#> std. dev. 0.0576 0.0709
#> weight sum 72 67
#> precision 0.0028 0.0028
#>
#> V8
#> mean 0.1561 0.122
#> std. dev. 0.0899 0.0827
#> weight sum 72 67
#> precision 0.0034 0.0034
#>
#> V9
#> mean 0.2214 0.1404
#> std. dev. 0.1264 0.0962
#> weight sum 72 67
#> precision 0.005 0.005
#>
#>
# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)
# Score the predictions
predictions$score()
#> classif.ce
#> 0.3768116
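As a follow-up sketch, reusing the objects from the example above: switch to probability predictions and score with measures other than the default classification error.

# Probability predictions (the learner supports predict type "prob").
learner$predict_type = "prob"
predictions = learner$predict(task, row_ids = ids$test)
# Accuracy and AUC instead of classification error.
predictions$score(msrs(c("classif.acc", "classif.auc")))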