Classification Naive Bayes Learner From Weka
Source: R/learner_RWeka_classif_naive_bayes_weka.R
mlr_learners_classif.naive_bayes_weka.Rd
Naive Bayes Classifier Using Estimator Classes.
Calls RWeka::make_Weka_classifier() from RWeka.
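For orientation, a minimal sketch of how such a Weka interface is created with RWeka; the Java class name weka/classifiers/bayes/NaiveBayes is an assumption based on Weka's standard Naive Bayes implementation, and the learner builds this wrapper internally rather than requiring user code:
library(RWeka)
# build an R interface to Weka's Naive Bayes classifier (assumed class name)
NaiveBayesWeka = make_Weka_classifier("weka/classifiers/bayes/NaiveBayes")
# the returned function is then used like any R modelling function, e.g.
# NaiveBayesWeka(Species ~ ., data = iris)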
Custom mlr3 parameters
output_debug_info
:original id: output-debug-info
do_not_check_capabilities
:original id: do-not-check-capabilities
num_decimal_places
:original id: num-decimal-places
batch_size
:original id: batch-size
Reason for change: the ids of these control arguments were changed because their original ids contain dashes, which do not follow the regular mlr3 parameter naming pattern (see the sketch below).
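For illustration, a minimal sketch of setting the renamed hyperparameters through the mlr3 interface (assuming the learner is provided via mlr3extralearners):
library(mlr3)
library(mlr3extralearners)
learner = lrn("classif.naive_bayes_weka")
# the underscore ids replace Weka's dashed option names (e.g. num-decimal-places)
learner$param_set$set_values(num_decimal_places = 4, output_debug_info = FALSE)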
Parameters
Id | Type | Default | Levels | Range
subset | untyped | - | - | -
na.action | untyped | - | - | -
K | logical | FALSE | TRUE, FALSE | -
D | logical | FALSE | TRUE, FALSE | -
O | logical | FALSE | TRUE, FALSE | -
output_debug_info | logical | FALSE | TRUE, FALSE | -
do_not_check_capabilities | logical | FALSE | TRUE, FALSE | -
num_decimal_places | integer | 2 | - | \([1, \infty)\)
batch_size | integer | 100 | - | \([1, \infty)\)
options | untyped | NULL | - | -
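A hedged construction example: hyperparameters can also be passed when the learner is created. The reading of K as Weka's -K option (kernel density estimation for numeric attributes instead of a single Gaussian) follows the upstream Weka documentation, not this page:
learner = lrn("classif.naive_bayes_weka",
  K = TRUE,               # kernel density estimator for numeric attributes (Weka -K)
  num_decimal_places = 4  # decimal places used when printing the model
)
learner$param_set$values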
References
John GH, Langley P (1995). “Estimating Continuous Distributions in Bayesian Classifiers.” In Eleventh Conference on Uncertainty in Artificial Intelligence, 338-345.
See also
as.data.table(mlr_learners)
for a table of available Learners in the running session (depending on the loaded packages).
Chapter in the mlr3book: https://mlr3book.mlr-org.com/basics.html#learners
mlr3learners for a selection of recommended learners.
mlr3cluster for unsupervised clustering learners.
mlr3pipelines to combine learners with pre- and postprocessing steps.
mlr3tuning for tuning of hyperparameters, mlr3tuningspaces for established default tuning spaces.
Super classes
mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifNaiveBayesWeka
Methods
Inherited methods
mlr3::Learner$base_learner()
mlr3::Learner$configure()
mlr3::Learner$encapsulate()
mlr3::Learner$format()
mlr3::Learner$help()
mlr3::Learner$predict()
mlr3::Learner$predict_newdata()
mlr3::Learner$print()
mlr3::Learner$reset()
mlr3::Learner$selected_features()
mlr3::Learner$train()
mlr3::LearnerClassif$predict_newdata_fast()
Method marshal()
Marshal the learner's model.
Arguments
...
(any)
Additional arguments passed to mlr3::marshal_model().
Method unmarshal()
Unmarshal the learner's model.
Arguments
...
(any)
Additional arguments passed to mlr3::unmarshal_model().
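A minimal usage sketch of the two methods on a trained learner; marshaling is typically only needed when the fitted model (which references a Java object) has to be serialized, e.g. for parallelization or saving to disk:
learner = lrn("classif.naive_bayes_weka")
learner$train(tsk("sonar"))
learner$marshal()     # convert the model into a serializable form
learner$unmarshal()   # restore the original model before predicting again
learner$predict(tsk("sonar"))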
Examples
# Define the Learner
learner = lrn("classif.naive_bayes_weka")
print(learner)
#>
#> ── <LearnerClassifNaiveBayesWeka> (classif.naive_bayes_weka): Naive Bayes ──────
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3 and RWeka
#> • Predict Types: [response] and prob
#> • Feature Types: logical, integer, numeric, factor, and ordered
#> • Encapsulation: none (fallback: -)
#> • Properties: marshal, missings, multiclass, and twoclass
#> • Other settings: use_weights = 'error'
# Define a Task
task = tsk("sonar")
# Create train and test set
ids = partition(task)
# Train the learner on the training ids
learner$train(task, row_ids = ids$train)
print(learner$model)
#> Naive Bayes Classifier
#>
#> Class
#> Attribute M R
#> (0.55) (0.45)
#> ===============================
#> V1
#> mean 0.0368 0.0218
#> std. dev. 0.0275 0.0124
#> weight sum 77 62
#> precision 0.0011 0.0011
#>
#> V10
#> mean 0.2455 0.1644
#> std. dev. 0.1312 0.1265
#> weight sum 77 62
#> precision 0.0051 0.0051
#>
#> V11
#> mean 0.2901 0.1766
#> std. dev. 0.1215 0.1192
#> weight sum 77 62
#> precision 0.0052 0.0052
#>
#> V12
#> mean 0.3087 0.2035
#> std. dev. 0.1217 0.1442
#> weight sum 77 62
#> precision 0.005 0.005
#>
#> V13
#> mean 0.3263 0.2384
#> std. dev. 0.127 0.1491
#> weight sum 77 62
#> precision 0.0053 0.0053
#>
#> V14
#> mean 0.3462 0.2943
#> std. dev. 0.1586 0.1762
#> weight sum 77 62
#> precision 0.0069 0.0069
#>
#> V15
#> mean 0.3462 0.334
#> std. dev. 0.1868 0.2308
#> weight sum 77 62
#> precision 0.0072 0.0072
#>
#> V16
#> mean 0.3935 0.3913
#> std. dev. 0.2066 0.2643
#> weight sum 77 62
#> precision 0.007 0.007
#>
#> V17
#> mean 0.4475 0.4249
#> std. dev. 0.2379 0.2902
#> weight sum 77 62
#> precision 0.0071 0.0071
#>
#> V18
#> mean 0.4979 0.4545
#> std. dev. 0.2502 0.2642
#> weight sum 77 62
#> precision 0.0071 0.0071
#>
#> V19
#> mean 0.5721 0.464
#> std. dev. 0.2557 0.256
#> weight sum 77 62
#> precision 0.0068 0.0068
#>
#> V2
#> mean 0.0441 0.0326
#> std. dev. 0.0377 0.0262
#> weight sum 77 62
#> precision 0.0018 0.0018
#>
#> V20
#> mean 0.6384 0.5006
#> std. dev. 0.2541 0.2621
#> weight sum 77 62
#> precision 0.0068 0.0068
#>
#> V21
#> mean 0.681 0.5375
#> std. dev. 0.256 0.2461
#> weight sum 77 62
#> precision 0.0071 0.0071
#>
#> V22
#> mean 0.692 0.5593
#> std. dev. 0.2459 0.2561
#> weight sum 77 62
#> precision 0.0072 0.0072
#>
#> V23
#> mean 0.6941 0.5948
#> std. dev. 0.2554 0.2461
#> weight sum 77 62
#> precision 0.0071 0.0071
#>
#> V24
#> mean 0.7053 0.637
#> std. dev. 0.2481 0.2437
#> weight sum 77 62
#> precision 0.0073 0.0073
#>
#> V25
#> mean 0.6989 0.6471
#> std. dev. 0.2324 0.2664
#> weight sum 77 62
#> precision 0.0075 0.0075
#>
#> V26
#> mean 0.7135 0.6695
#> std. dev. 0.2279 0.2461
#> weight sum 77 62
#> precision 0.0069 0.0069
#>
#> V27
#> mean 0.7081 0.677
#> std. dev. 0.2615 0.2266
#> weight sum 77 62
#> precision 0.0076 0.0076
#>
#> V28
#> mean 0.6838 0.6676
#> std. dev. 0.2642 0.2001
#> weight sum 77 62
#> precision 0.0074 0.0074
#>
#> V29
#> mean 0.6235 0.6196
#> std. dev. 0.2594 0.2332
#> weight sum 77 62
#> precision 0.0074 0.0074
#>
#> V3
#> mean 0.052 0.039
#> std. dev. 0.0486 0.0335
#> weight sum 77 62
#> precision 0.0023 0.0023
#>
#> V30
#> mean 0.5556 0.5648
#> std. dev. 0.2176 0.2261
#> weight sum 77 62
#> precision 0.007 0.007
#>
#> V31
#> mean 0.4634 0.5185
#> std. dev. 0.2297 0.1928
#> weight sum 77 62
#> precision 0.0066 0.0066
#>
#> V32
#> mean 0.4153 0.4292
#> std. dev. 0.2256 0.2129
#> weight sum 77 62
#> precision 0.0063 0.0063
#>
#> V33
#> mean 0.3998 0.4056
#> std. dev. 0.2005 0.2016
#> weight sum 77 62
#> precision 0.007 0.007
#>
#> V34
#> mean 0.3705 0.4095
#> std. dev. 0.2128 0.2479
#> weight sum 77 62
#> precision 0.0069 0.0069
#>
#> V35
#> mean 0.3422 0.4314
#> std. dev. 0.2358 0.2591
#> weight sum 77 62
#> precision 0.0071 0.0071
#>
#> V36
#> mean 0.3222 0.4551
#> std. dev. 0.2314 0.2653
#> weight sum 77 62
#> precision 0.0073 0.0073
#>
#> V37
#> mean 0.3146 0.4235
#> std. dev. 0.2113 0.2546
#> weight sum 77 62
#> precision 0.0066 0.0066
#>
#> V38
#> mean 0.3245 0.3743
#> std. dev. 0.1806 0.2339
#> weight sum 77 62
#> precision 0.007 0.007
#>
#> V39
#> mean 0.3301 0.3166
#> std. dev. 0.1704 0.2314
#> weight sum 77 62
#> precision 0.007 0.007
#>
#> V4
#> mean 0.0705 0.0427
#> std. dev. 0.06 0.0356
#> weight sum 77 62
#> precision 0.0033 0.0033
#>
#> V40
#> mean 0.3055 0.3145
#> std. dev. 0.1597 0.2012
#> weight sum 77 62
#> precision 0.0067 0.0067
#>
#> V41
#> mean 0.3019 0.2899
#> std. dev. 0.1598 0.1824
#> weight sum 77 62
#> precision 0.0063 0.0063
#>
#> V42
#> mean 0.3166 0.2586
#> std. dev. 0.165 0.1626
#> weight sum 77 62
#> precision 0.0059 0.0059
#>
#> V43
#> mean 0.2791 0.2142
#> std. dev. 0.1354 0.1085
#> weight sum 77 62
#> precision 0.0053 0.0053
#>
#> V44
#> mean 0.2395 0.1698
#> std. dev. 0.1444 0.079
#> weight sum 77 62
#> precision 0.0042 0.0042
#>
#> V45
#> mean 0.2351 0.1409
#> std. dev. 0.1691 0.0841
#> weight sum 77 62
#> precision 0.0047 0.0047
#>
#> V46
#> mean 0.1876 0.1175
#> std. dev. 0.1491 0.0938
#> weight sum 77 62
#> precision 0.0054 0.0054
#>
#> V47
#> mean 0.1397 0.0985
#> std. dev. 0.0911 0.0677
#> weight sum 77 62
#> precision 0.0041 0.0041
#>
#> V48
#> mean 0.1063 0.0743
#> std. dev. 0.0648 0.0491
#> weight sum 77 62
#> precision 0.0024 0.0024
#>
#> V49
#> mean 0.0608 0.0423
#> std. dev. 0.0344 0.032
#> weight sum 77 62
#> precision 0.0015 0.0015
#>
#> V5
#> mean 0.0976 0.0612
#> std. dev. 0.0651 0.051
#> weight sum 77 62
#> precision 0.003 0.003
#>
#> V50
#> mean 0.0223 0.0194
#> std. dev. 0.0137 0.0142
#> weight sum 77 62
#> precision 0.0007 0.0007
#>
#> V51
#> mean 0.0183 0.0128
#> std. dev. 0.0131 0.0092
#> weight sum 77 62
#> precision 0.0009 0.0009
#>
#> V52
#> mean 0.0154 0.011
#> std. dev. 0.0109 0.0067
#> weight sum 77 62
#> precision 0.0006 0.0006
#>
#> V53
#> mean 0.0123 0.0097
#> std. dev. 0.0081 0.0062
#> weight sum 77 62
#> precision 0.0004 0.0004
#>
#> V54
#> mean 0.0127 0.0097
#> std. dev. 0.0086 0.0052
#> weight sum 77 62
#> precision 0.0003 0.0003
#>
#> V55
#> mean 0.0101 0.0092
#> std. dev. 0.0087 0.0055
#> weight sum 77 62
#> precision 0.0004 0.0004
#>
#> V56
#> mean 0.0094 0.007
#> std. dev. 0.0064 0.0046
#> weight sum 77 62
#> precision 0.0004 0.0004
#>
#> V57
#> mean 0.0081 0.0075
#> std. dev. 0.0061 0.0053
#> weight sum 77 62
#> precision 0.0004 0.0004
#>
#> V58
#> mean 0.0094 0.0062
#> std. dev. 0.008 0.0044
#> weight sum 77 62
#> precision 0.0004 0.0004
#>
#> V59
#> mean 0.0089 0.0068
#> std. dev. 0.0072 0.0048
#> weight sum 77 62
#> precision 0.0004 0.0004
#>
#> V6
#> mean 0.1159 0.0995
#> std. dev. 0.0532 0.0725
#> weight sum 77 62
#> precision 0.0028 0.0028
#>
#> V60
#> mean 0.0069 0.006
#> std. dev. 0.0063 0.0037
#> weight sum 77 62
#> precision 0.0006 0.0006
#>
#> V7
#> mean 0.1277 0.1236
#> std. dev. 0.0546 0.0712
#> weight sum 77 62
#> precision 0.0028 0.0028
#>
#> V8
#> mean 0.1439 0.1273
#> std. dev. 0.0717 0.0853
#> weight sum 77 62
#> precision 0.0031 0.0031
#>
#> V9
#> mean 0.2067 0.1476
#> std. dev. 0.1095 0.1129
#> weight sum 77 62
#> precision 0.0042 0.0042
#>
#>
# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)
# Score the predictions
predictions$score()
#> classif.ce
#> 0.2898551
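As a follow-up sketch, the learner also supports the prob predict type (see the print output above), so probability predictions can be scored with a probabilistic measure such as log loss:
# predict class probabilities instead of hard labels
learner_prob = lrn("classif.naive_bayes_weka", predict_type = "prob")
learner_prob$train(task, row_ids = ids$train)
learner_prob$predict(task, row_ids = ids$test)$score(msr("classif.logloss"))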