2021
DOI: 10.1186/s13634-021-00742-6
Improved naive Bayes classification algorithm for traffic risk management

Abstract: The naive Bayes classification algorithm is widely used in big data analysis and other fields because of its simple and fast structure. To address the algorithm's shortcomings, this paper applies feature weighting and Laplace calibration, yielding an improved naive Bayes classification algorithm. Numerical simulation shows that when the sample size is large, the accuracy of the improved naive Bayes classification algorithm exceeds 99%, …
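The two improvements named in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the feature weights are assumed to be supplied externally (e.g. computed from information gain), and `alpha=1` gives standard Laplace (add-one) smoothing so unseen attribute values never receive zero probability.

```python
import numpy as np

def train_nb(X, y, alpha=1.0):
    """Fit a categorical naive Bayes model with Laplace (add-alpha) smoothing.

    X: (n_samples, n_features) integer-coded categorical features.
    y: (n_samples,) integer class labels.
    Returns class log-priors and per-(class, feature) log-probability tables.
    """
    classes = np.unique(y)
    n_features = X.shape[1]
    log_priors = {}
    log_likelihoods = {}  # (class, feature) -> log P(value | class)
    for c in classes:
        Xc = X[y == c]
        log_priors[c] = np.log(len(Xc) / len(X))
        for j in range(n_features):
            n_values = int(X[:, j].max()) + 1
            counts = np.bincount(Xc[:, j], minlength=n_values).astype(float)
            # Laplace calibration: add alpha to every count so that attribute
            # values unseen in this class still get nonzero probability.
            probs = (counts + alpha) / (counts.sum() + alpha * n_values)
            log_likelihoods[(c, j)] = np.log(probs)
    return log_priors, log_likelihoods

def predict_nb(x, log_priors, log_likelihoods, weights=None):
    """Classify one sample; weights[j] scales feature j's log-likelihood
    (uniform weights recover plain naive Bayes)."""
    n_features = len(x)
    if weights is None:
        weights = np.ones(n_features)
    best_c, best_score = None, -np.inf
    for c, lp in log_priors.items():
        score = lp + sum(weights[j] * log_likelihoods[(c, j)][x[j]]
                         for j in range(n_features))
        if score > best_score:
            best_c, best_score = c, score
    return best_c
```

With uniform weights this reduces to standard smoothed naive Bayes; the paper's weighting scheme would replace the `weights` vector.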

Cited by 97 publications
(58 citation statements)
References 35 publications
“…Two main scenarios for the implementation of the CPES will be pursued. According to the first scenario, the IBGA as a hybrid feature selection method will be tested against other modern selection methods using NB classifier as a standard classification method [ 29 , 30 , 31 ]. Then, the complete strategy called CPES will be tested against other modern diagnostic strategies.…”
Section: Results (mentioning)
confidence: 99%
“…The modern selection methods used in the comparison are the Genetic Algorithm (GA) [2, 27, 28], Feature Selection via Directional Outliers Correcting (FSDOC) [36], the Orthogonal Least Squares (OLS) based feature selection method [37], the Modified Grasshopper Optimization Algorithm (MGOA) [38], and the Stochastic Diffusion Search (SDS) algorithm [29]. To evaluate these feature selection methods, the NB classifier is used as a standard method [29, 30, 31]. The figures (8→13) show the accuracy, error, precision, recall, and run-time of the used feature selection methods.…”
Section: Results (mentioning)
confidence: 99%
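The evaluation metrics named in the quote above (accuracy, error, precision, recall) all derive from the confusion-matrix counts. A minimal NumPy sketch, with the positive class label as an assumed parameter:

```python
import numpy as np

def classification_metrics(y_true, y_pred, positive=1):
    """Accuracy, error rate, precision, and recall for one positive class."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_pred == positive) & (y_true == positive))  # true positives
    fp = np.sum((y_pred == positive) & (y_true != positive))  # false positives
    fn = np.sum((y_pred != positive) & (y_true == positive))  # false negatives
    accuracy = float(np.mean(y_pred == y_true))
    return {
        "accuracy": accuracy,
        "error": 1.0 - accuracy,
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }
```

Run-time, the fifth quantity the quote mentions, is measured separately (e.g. by timing the selection plus training step).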
“…However, KDE is an effective way of estimating the probability density function of given data when the distribution of the data is unknown. In other words, KDE makes it easy to handle non-Gaussian distributions when the dataset has continuous or numeric attributes (Kaviani & Sunita, 2017; Chen et al, 2021).…”
Citation type: mentioning
confidence: 99%
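The KDE idea in the quote above can be sketched as follows. This is a minimal illustration, not the cited authors' implementation: each class-conditional likelihood P(x_j | c) for a continuous attribute is estimated with `scipy.stats.gaussian_kde` instead of assuming a Gaussian form, and the class with the highest posterior log-score is returned.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_nb_fit(X, y):
    """Per class and per feature, fit a Gaussian KDE to the training values."""
    model = {}
    for c in np.unique(y):
        Xc = X[y == c]
        model[c] = {
            "log_prior": np.log(len(Xc) / len(X)),
            "kdes": [gaussian_kde(Xc[:, j]) for j in range(X.shape[1])],
        }
    return model

def kde_nb_predict(x, model):
    """Score each class by log-prior plus summed log KDE densities,
    then return the argmax class (a small epsilon guards against log 0)."""
    scores = {
        c: m["log_prior"] + sum(np.log(k(x[j])[0] + 1e-300)
                                for j, k in enumerate(m["kdes"]))
        for c, m in model.items()
    }
    return max(scores, key=scores.get)
```

This drops the Gaussian assumption entirely at the cost of keeping the training data inside each KDE, so prediction is slower than with a parametric naive Bayes.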