Classification Naive Bayes Learner From Weka
Source: R/learner_RWeka_classif_naive_bayes_weka.R
mlr_learners_classif.naive_bayes_weka.Rd
Naive Bayes Classifier Using Estimator Classes.
Calls RWeka::make_Weka_classifier() from RWeka.
Custom mlr3 parameters
output_debug_info
original id: output-debug-info
do_not_check_capabilities
original id: do-not-check-capabilities
num_decimal_places
original id: num-decimal-places
batch_size
original id: batch-size
Reason for change: the ids of these control arguments were renamed because their original Weka ids contain an irregular pattern (hyphens), which is not valid in mlr3 parameter ids.
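As a minimal sketch (assuming mlr3 and mlr3extralearners are installed), the renamed control arguments are set through their mlr3 ids, not the original hyphenated Weka ids:

```r
library(mlr3)
library(mlr3extralearners)

# Set the renamed control arguments via their mlr3 ids;
# the comments show the corresponding original Weka ids.
learner = lrn("classif.naive_bayes_weka",
  output_debug_info = FALSE,  # original id: output-debug-info
  num_decimal_places = 3,     # original id: num-decimal-places
  batch_size = 100            # original id: batch-size
)

# Inspect the configured hyperparameters
learner$param_set$values
```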
Parameters
Id | Type | Default | Levels | Range
subset | untyped | - | - | -
na.action | untyped | - | - | -
K | logical | FALSE | TRUE, FALSE | -
D | logical | FALSE | TRUE, FALSE | -
O | logical | FALSE | TRUE, FALSE | -
output_debug_info | logical | FALSE | TRUE, FALSE | -
do_not_check_capabilities | logical | FALSE | TRUE, FALSE | -
num_decimal_places | integer | 2 | - | \([1, \infty)\)
batch_size | integer | 100 | - | \([1, \infty)\)
options | untyped | NULL | - | -
References
John GH, Langley P (1995). “Estimating Continuous Distributions in Bayesian Classifiers.” In Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence, 338-345.
See also
as.data.table(mlr_learners)
for a table of available Learners in the running session (depending on the loaded packages).
Chapter in the mlr3book: https://mlr3book.mlr-org.com/basics.html#learners
mlr3learners for a selection of recommended learners.
mlr3cluster for unsupervised clustering learners.
mlr3pipelines to combine learners with pre- and postprocessing steps.
mlr3tuning for tuning of hyperparameters, mlr3tuningspaces for established default tuning spaces.
Super classes
mlr3::Learner
-> mlr3::LearnerClassif
-> LearnerClassifNaiveBayesWeka
Methods
Inherited methods
mlr3::Learner$base_learner()
mlr3::Learner$configure()
mlr3::Learner$encapsulate()
mlr3::Learner$format()
mlr3::Learner$help()
mlr3::Learner$predict()
mlr3::Learner$predict_newdata()
mlr3::Learner$print()
mlr3::Learner$reset()
mlr3::Learner$selected_features()
mlr3::Learner$train()
mlr3::LearnerClassif$predict_newdata_fast()
Method marshal()
Marshal the learner's model.
Arguments
...
(any)
Additional arguments passed to mlr3::marshal_model().
Method unmarshal()
Unmarshal the learner's model.
Arguments
...
(any)
Additional arguments passed to mlr3::unmarshal_model().
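A minimal sketch of the marshaling round trip, e.g. for moving a trained learner between R processes (assumes mlr3 and mlr3extralearners are loaded):

```r
library(mlr3)
library(mlr3extralearners)

learner = lrn("classif.naive_bayes_weka")
learner$train(tsk("sonar"))

learner$marshal()    # convert the Weka model into a serializable form
learner$marshaled    # TRUE while the model is in marshaled form
learner$unmarshal()  # restore the model so it can predict again
```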
Examples
# Define the Learner
learner = lrn("classif.naive_bayes_weka")
print(learner)
#>
#> ── <LearnerClassifNaiveBayesWeka> (classif.naive_bayes_weka): Naive Bayes ──────
#> • Model: -
#> • Parameters: list()
#> • Packages: mlr3 and RWeka
#> • Predict Types: [response] and prob
#> • Feature Types: logical, integer, numeric, factor, and ordered
#> • Encapsulation: none (fallback: -)
#> • Properties: marshal, missings, multiclass, and twoclass
#> • Other settings: use_weights = 'error'
# Define a Task
task = tsk("sonar")
# Create train and test set
ids = partition(task)
# Train the learner on the training ids
learner$train(task, row_ids = ids$train)
print(learner$model)
#> Naive Bayes Classifier
#>
#> Class
#> Attribute M R
#> (0.52) (0.48)
#> ===============================
#> V1
#> mean 0.0386 0.0212
#> std. dev. 0.0304 0.0139
#> weight sum 72 67
#> precision 0.0011 0.0011
#>
#> V10
#> mean 0.243 0.1535
#> std. dev. 0.1421 0.1041
#> weight sum 72 67
#> precision 0.005 0.005
#>
#> V11
#> mean 0.274 0.1704
#> std. dev. 0.1158 0.108
#> weight sum 72 67
#> precision 0.0051 0.0051
#>
#> V12
#> mean 0.2871 0.1843
#> std. dev. 0.118 0.1342
#> weight sum 72 67
#> precision 0.0046 0.0046
#>
#> V13
#> mean 0.2892 0.2402
#> std. dev. 0.1277 0.1423
#> weight sum 72 67
#> precision 0.0051 0.0051
#>
#> V14
#> mean 0.2957 0.2975
#> std. dev. 0.144 0.1729
#> weight sum 72 67
#> precision 0.0059 0.0059
#>
#> V15
#> mean 0.3175 0.3335
#> std. dev. 0.1763 0.225
#> weight sum 72 67
#> precision 0.0073 0.0073
#>
#> V16
#> mean 0.3697 0.3973
#> std. dev. 0.1957 0.2695
#> weight sum 72 67
#> precision 0.0072 0.0072
#>
#> V17
#> mean 0.3945 0.4392
#> std. dev. 0.2366 0.2995
#> weight sum 72 67
#> precision 0.0073 0.0073
#>
#> V18
#> mean 0.4315 0.4645
#> std. dev. 0.2548 0.2728
#> weight sum 72 67
#> precision 0.0068 0.0068
#>
#> V19
#> mean 0.5094 0.4655
#> std. dev. 0.2583 0.263
#> weight sum 72 67
#> precision 0.0069 0.0069
#>
#> V2
#> mean 0.051 0.0311
#> std. dev. 0.0419 0.0261
#> weight sum 72 67
#> precision 0.0018 0.0018
#>
#> V20
#> mean 0.6008 0.4906
#> std. dev. 0.2661 0.2672
#> weight sum 72 67
#> precision 0.0069 0.0069
#>
#> V21
#> mean 0.652 0.5514
#> std. dev. 0.2687 0.2587
#> weight sum 72 67
#> precision 0.0071 0.0071
#>
#> V22
#> mean 0.642 0.5867
#> std. dev. 0.2488 0.265
#> weight sum 72 67
#> precision 0.007 0.007
#>
#> V23
#> mean 0.6391 0.6275
#> std. dev. 0.256 0.254
#> weight sum 72 67
#> precision 0.007 0.007
#>
#> V24
#> mean 0.6572 0.6724
#> std. dev. 0.2451 0.2338
#> weight sum 72 67
#> precision 0.0074 0.0074
#>
#> V25
#> mean 0.6449 0.6909
#> std. dev. 0.2501 0.2378
#> weight sum 72 67
#> precision 0.0073 0.0073
#>
#> V26
#> mean 0.6651 0.7126
#> std. dev. 0.2454 0.2196
#> weight sum 72 67
#> precision 0.0069 0.0069
#>
#> V27
#> mean 0.6705 0.6815
#> std. dev. 0.2815 0.2174
#> weight sum 72 67
#> precision 0.0075 0.0075
#>
#> V28
#> mean 0.6825 0.649
#> std. dev. 0.2735 0.2062
#> weight sum 72 67
#> precision 0.0075 0.0075
#>
#> V29
#> mean 0.6379 0.6109
#> std. dev. 0.2524 0.2389
#> weight sum 72 67
#> precision 0.0075 0.0075
#>
#> V3
#> mean 0.0546 0.0363
#> std. dev. 0.0502 0.03
#> weight sum 72 67
#> precision 0.0023 0.0023
#>
#> V30
#> mean 0.5983 0.5679
#> std. dev. 0.2186 0.2334
#> weight sum 72 67
#> precision 0.007 0.007
#>
#> V31
#> mean 0.5031 0.5173
#> std. dev. 0.2253 0.1987
#> weight sum 72 67
#> precision 0.0063 0.0063
#>
#> V32
#> mean 0.4425 0.4375
#> std. dev. 0.2191 0.22
#> weight sum 72 67
#> precision 0.0065 0.0065
#>
#> V33
#> mean 0.4191 0.4252
#> std. dev. 0.1951 0.2325
#> weight sum 72 67
#> precision 0.0069 0.0069
#>
#> V34
#> mean 0.3959 0.425
#> std. dev. 0.2131 0.2443
#> weight sum 72 67
#> precision 0.0067 0.0067
#>
#> V35
#> mean 0.3702 0.4442
#> std. dev. 0.2568 0.2573
#> weight sum 72 67
#> precision 0.0072 0.0072
#>
#> V36
#> mean 0.3452 0.4553
#> std. dev. 0.2565 0.2621
#> weight sum 72 67
#> precision 0.0072 0.0072
#>
#> V37
#> mean 0.3465 0.4092
#> std. dev. 0.238 0.2516
#> weight sum 72 67
#> precision 0.0066 0.0066
#>
#> V38
#> mean 0.3714 0.3299
#> std. dev. 0.2182 0.2255
#> weight sum 72 67
#> precision 0.007 0.007
#>
#> V39
#> mean 0.3648 0.2922
#> std. dev. 0.1944 0.2156
#> weight sum 72 67
#> precision 0.0068 0.0068
#>
#> V4
#> mean 0.0706 0.0429
#> std. dev. 0.0627 0.0313
#> weight sum 72 67
#> precision 0.0034 0.0034
#>
#> V40
#> mean 0.3217 0.3088
#> std. dev. 0.1769 0.1906
#> weight sum 72 67
#> precision 0.0067 0.0067
#>
#> V41
#> mean 0.3137 0.2874
#> std. dev. 0.1692 0.1787
#> weight sum 72 67
#> precision 0.0063 0.0063
#>
#> V42
#> mean 0.3184 0.2461
#> std. dev. 0.1674 0.1795
#> weight sum 72 67
#> precision 0.0057 0.0057
#>
#> V43
#> mean 0.2759 0.2096
#> std. dev. 0.1453 0.143
#> weight sum 72 67
#> precision 0.0057 0.0057
#>
#> V44
#> mean 0.242 0.174
#> std. dev. 0.1479 0.1149
#> weight sum 72 67
#> precision 0.0058 0.0058
#>
#> V45
#> mean 0.2497 0.1423
#> std. dev. 0.1742 0.1002
#> weight sum 72 67
#> precision 0.0045 0.0045
#>
#> V46
#> mean 0.2024 0.1134
#> std. dev. 0.1521 0.0961
#> weight sum 72 67
#> precision 0.0055 0.0055
#>
#> V47
#> mean 0.1484 0.0911
#> std. dev. 0.1009 0.071
#> weight sum 72 67
#> precision 0.0041 0.0041
#>
#> V48
#> mean 0.1174 0.0714
#> std. dev. 0.0691 0.0522
#> weight sum 72 67
#> precision 0.0024 0.0024
#>
#> V49
#> mean 0.0685 0.0398
#> std. dev. 0.037 0.0341
#> weight sum 72 67
#> precision 0.0015 0.0015
#>
#> V5
#> mean 0.0904 0.0599
#> std. dev. 0.0647 0.0462
#> weight sum 72 67
#> precision 0.003 0.003
#>
#> V50
#> mean 0.0239 0.0169
#> std. dev. 0.0141 0.0123
#> weight sum 72 67
#> precision 0.0007 0.0007
#>
#> V51
#> mean 0.0203 0.0129
#> std. dev. 0.0148 0.0084
#> weight sum 72 67
#> precision 0.0009 0.0009
#>
#> V52
#> mean 0.017 0.0102
#> std. dev. 0.0114 0.0068
#> weight sum 72 67
#> precision 0.0006 0.0006
#>
#> V53
#> mean 0.0116 0.0095
#> std. dev. 0.0074 0.0059
#> weight sum 72 67
#> precision 0.0004 0.0004
#>
#> V54
#> mean 0.0126 0.0094
#> std. dev. 0.0093 0.0055
#> weight sum 72 67
#> precision 0.0003 0.0003
#>
#> V55
#> mean 0.0106 0.0082
#> std. dev. 0.008 0.005
#> weight sum 72 67
#> precision 0.0004 0.0004
#>
#> V56
#> mean 0.0092 0.0077
#> std. dev. 0.0061 0.0048
#> weight sum 72 67
#> precision 0.0003 0.0003
#>
#> V57
#> mean 0.0078 0.0078
#> std. dev. 0.0052 0.0055
#> weight sum 72 67
#> precision 0.0003 0.0003
#>
#> V58
#> mean 0.0084 0.0068
#> std. dev. 0.0055 0.0048
#> weight sum 72 67
#> precision 0.0002 0.0002
#>
#> V59
#> mean 0.009 0.0071
#> std. dev. 0.0076 0.0047
#> weight sum 72 67
#> precision 0.0004 0.0004
#>
#> V6
#> mean 0.1074 0.0962
#> std. dev. 0.0547 0.0647
#> weight sum 72 67
#> precision 0.0028 0.0028
#>
#> V60
#> mean 0.0071 0.0061
#> std. dev. 0.0068 0.004
#> weight sum 72 67
#> precision 0.0005 0.0005
#>
#> V7
#> mean 0.1215 0.116
#> std. dev. 0.0577 0.0668
#> weight sum 72 67
#> precision 0.0028 0.0028
#>
#> V8
#> mean 0.1542 0.1195
#> std. dev. 0.0918 0.0826
#> weight sum 72 67
#> precision 0.0034 0.0034
#>
#> V9
#> mean 0.2117 0.1318
#> std. dev. 0.1339 0.0911
#> weight sum 72 67
#> precision 0.005 0.005
#>
#>
# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)
# Score the predictions
predictions$score()
#> classif.ce
#> 0.3768116
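The per-attribute means and standard deviations in the printed model are the estimates described by John and Langley (1995): under the standard Gaussian naive Bayes assumption, the posterior for class \(c\) given a numeric feature vector \(x\) is proportional to the class prior times a product of per-attribute normal densities,

\[
P(c \mid x) \;\propto\; P(c) \prod_{j} \mathcal{N}\!\left(x_j \mid \mu_{jc}, \sigma_{jc}\right),
\]

where \(\mu_{jc}\) and \(\sigma_{jc}\) are the "mean" and "std. dev." entries for attribute \(V_j\) and class \(c\) shown above.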