2022
DOI: 10.28924/ada/stat.2.8

Multinomial Naïve Bayes Classifier: Bayesian versus Nonparametric Classifier Approach

Abstract: This paper proposes a Naïve Bayes classifier for Bayesian and nonparametric methods of analyzing multinomial regression. The Naïve Bayes classifier adopts Bayes' rule to solve the posterior of the multinomial regression via its link function, the logit link. The nonparametric approach adopts Gaussian and bi-weight kernels, Silverman's rule-of-thumb bandwidth selector, and an adjusted bandwidth for kernel density estimation. Three categorical responses of information on 78 people using one of three diets (Diet A, B, a…
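The abstract names Silverman's rule-of-thumb bandwidth selector and a Gaussian kernel for density estimation. A minimal sketch of that combination (a generic textbook implementation, not the paper's code; the 78-point sample merely mirrors the paper's sample size):

```python
import numpy as np

def silverman_bandwidth(x):
    """Silverman's rule of thumb: h = 0.9 * min(sd, IQR/1.34) * n^(-1/5)."""
    n = len(x)
    sd = np.std(x, ddof=1)
    iqr = np.subtract(*np.percentile(x, [75, 25]))  # p75 - p25
    return 0.9 * min(sd, iqr / 1.34) * n ** (-0.2)

def gaussian_kde(x, data, h):
    """Kernel density estimate at points x using a Gaussian kernel with bandwidth h."""
    u = (x[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
data = rng.normal(size=78)                 # same sample size as the diet data
h = silverman_bandwidth(data)
grid = np.linspace(-4, 4, 81)
dens = gaussian_kde(grid, data, h)
```

The bi-weight kernel variant mentioned in the abstract would replace only the kernel function; the bandwidth selector is unchanged.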

Cited by 1 publication (2 citation statements)
References 7 publications
“…The true weights for the three associated components are 0.5, 0.3, and 0.2, respectively. Their true means are (0,0), (5,5), and (-3,7) for the first, second, and third components, respectively, for simulated data of sample size 120, and their true sigmas (variances) are (1,0), (2,0.9), and (1,-0.9), respectively. The starting guess for the weights of each component was assigned equal weights, rep(1,3)/3, via iteration of the sampler.…”
Section: Discussion of Simulation Results
confidence: 99%
“…Consequently, conjugate priors defined for certain likelihoods do not always yield posteriors of the same form as the likelihood. In addition, the observation that some non-informative priors produce undefined posteriors irrespective of the sample size is a clear indicator of the complexity of Bayesian inference for some models [5].…”
Section: Introduction
confidence: 99%
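As a concrete illustration of the conjugacy being discussed, a minimal sketch of the standard Beta–Binomial conjugate update (a textbook case, not taken from the cited paper): here the prior and posterior do share the same family.

```python
# Beta(a, b) prior with a Binomial(n, p) likelihood is conjugate:
# observing k successes in n trials gives a Beta(a + k, b + n - k) posterior.
def beta_binomial_update(a, b, k, n):
    return a + k, b + (n - k)

# Uniform Beta(1, 1) prior, 7 successes in 10 trials -> Beta(8, 4) posterior.
a_post, b_post = beta_binomial_update(a=1, b=1, k=7, n=10)
post_mean = a_post / (a_post + b_post)   # posterior mean of p
```

When no such closed-form update exists (the situation the quoted passage points to), the posterior must instead be approximated, e.g. by MCMC sampling.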