2021 5th International Conference on Trends in Electronics and Informatics (ICOEI) 2021
DOI: 10.1109/icoei51242.2021.9452898
A comprehensive study on pre-pruning and post-pruning methods of decision tree classification algorithm

Cited by 18 publications (4 citation statements)
References 19 publications
“…The naïve Bayes algorithm makes predictions using Bayes' theorem, a statistical approach [15]. The calculation finds the probability of a hypothesis given new evidence and data.…”
Section: Naïve Bayes
confidence: 99%
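The Bayes' theorem calculation the excerpt describes can be sketched with a worked example; the prior and likelihood values below are purely illustrative, not taken from the cited work:

```python
# Worked Bayes' theorem calculation: P(H | E) = P(E | H) * P(H) / P(E),
# i.e. the probability of a hypothesis H given new evidence E.
prior = 0.01        # P(H): prior probability of the hypothesis (illustrative)
likelihood = 0.90   # P(E | H): probability of the evidence if H holds
false_pos = 0.05    # P(E | not H): probability of the evidence otherwise

# Total probability of observing the evidence, P(E)
evidence = likelihood * prior + false_pos * (1 - prior)

# Posterior probability of the hypothesis given the evidence
posterior = likelihood * prior / evidence
print(round(posterior, 4))  # → 0.1538
```

Even with a high likelihood, a small prior keeps the posterior modest, which is exactly the hypothesis-reweighting behaviour the excerpt refers to.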
“…To overcome this problem in ML models, it is necessary to prune and remove some nodes or branches [26].…”
Section: DT
confidence: 99%
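One concrete way to remove weak nodes or branches after the tree is grown is cost-complexity (post-)pruning. This is a sketch using scikit-learn's implementation; the cited work may use a different pruning criterion, and the mid-range alpha choice here is illustrative:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Grow the tree fully first
full = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)

# Effective alphas of the cost-complexity pruning path
path = full.cost_complexity_pruning_path(X_tr, y_tr)

# Refit with a mid-range alpha: weaker branches are pruned away
alpha = path.ccp_alphas[len(path.ccp_alphas) // 2]
pruned = DecisionTreeClassifier(random_state=0, ccp_alpha=alpha).fit(X_tr, y_tr)

print(full.tree_.node_count, "→", pruned.tree_.node_count)
```

Larger `ccp_alpha` values prune more aggressively; in practice the alpha is usually chosen by cross-validated test accuracy rather than taken from the middle of the path.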
“…Decision trees are one of the most popular machine learning algorithms, dividing data repeatedly to form classes or groups [1]. As a classification method, the decision tree is very effective [2]: classification tasks are modeled as a set of hierarchical decisions on feature variables, arranged in the form of a tree [3]. Classification algorithms in the decision tree family include ID3, C4.5, and CART [4].…”
Section: Introduction
confidence: 99%
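The hierarchical decisions on feature variables mentioned above can be made visible with a minimal sketch; this assumes scikit-learn, whose `DecisionTreeClassifier` implements an optimized version of CART (ID3 and C4.5 are not provided by that library):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
X, y = iris.data, iris.target

# A shallow CART-style tree: each internal node is one decision on a feature
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Print the tree as nested if/else rules on the feature variables
print(export_text(tree, feature_names=iris.feature_names))
```

The printed rules show the tree structure directly: each indentation level is one hierarchical decision, and each leaf assigns a class.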
“…Overfitting decreases the classifier's accuracy because it fails to generalize properly to unseen instances [7]. For this, pruning is necessary [2]. Pruning is the process of cutting or removing unwanted nodes and branches that overfit the decision tree [8].…”
Section: Introduction
confidence: 99%
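The paper's other pruning family, pre-pruning, stops tree growth early instead of cutting branches afterwards. A sketch, assuming scikit-learn; the `max_depth` and `min_samples_leaf` limits are illustrative, not values from the cited work:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Unrestricted tree: memorizes the training data, prone to overfitting
overfit = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)

# Pre-pruned tree: growth is stopped early by depth and leaf-size limits
prepruned = DecisionTreeClassifier(
    max_depth=4, min_samples_leaf=5, random_state=0
).fit(X_tr, y_tr)

print("full tree:   train", overfit.score(X_tr, y_tr),
      "test", overfit.score(X_te, y_te))
print("pre-pruned:  train", prepruned.score(X_tr, y_tr),
      "test", prepruned.score(X_te, y_te))
```

The gap between training and test accuracy for the unrestricted tree is the failure to generalize that the excerpt describes; pre-pruning narrows that gap by limiting model capacity.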