2019
DOI: 10.1088/1742-6596/1237/2/022116

Application of Improved Decision Tree C4.5 Algorithms in the Judgment of Diabetes Diagnostic Effectiveness

Abstract: The C4.5 algorithm must scan the data set repeatedly while constructing a decision tree, which makes it inefficient. A C4.5 decision tree optimization algorithm based on data dimension reduction is proposed; it generates a more concise decision tree and improves the efficiency of the algorithm. At the same time, it eliminates the interference of redundant attributes and improves the prediction accuracy of the algorithm. The simulation results of the improved C4.5 algorithm and the traditional algorit…
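The abstract is cut off here, so the paper's exact dimension-reduction step is not visible. As a reading aid only, the Python sketch below shows one plausible interpretation: score every attribute with the C4.5 gain-ratio criterion and drop the low-scoring (redundant) columns before the tree is induced. The function names, the threshold, and the overall design are assumptions, not the authors' method.

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    total = len(labels)
    return -sum((c / total) * log2(c / total) for c in Counter(labels).values())

def gain_ratio(rows, labels, attr_index):
    """C4.5 split criterion: information gain divided by split information."""
    total = len(rows)
    partitions = {}
    for row, label in zip(rows, labels):
        partitions.setdefault(row[attr_index], []).append(label)
    remainder = sum(len(part) / total * entropy(part) for part in partitions.values())
    gain = entropy(labels) - remainder
    split_info = entropy([row[attr_index] for row in rows])
    return gain / split_info if split_info > 0 else 0.0

def reduce_attributes(rows, labels, attr_names, threshold=0.05):
    """Hypothetical dimension-reduction step: keep only the attributes whose
    gain ratio clears a threshold, so later tree induction never has to
    revisit redundant columns."""
    keep = [i for i in range(len(attr_names)) if gain_ratio(rows, labels, i) > threshold]
    reduced = [[row[i] for i in keep] for row in rows]
    return reduced, [attr_names[i] for i in keep]
```

After such a filter, standard C4.5 induction would run on the reduced attribute table, which is consistent with the abstract's claim of a more concise tree and less interference from redundant attributes.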

Cited by 3 publications (2 citation statements) · References: 1 publication

“…In general, the C4.5 algorithm for building a decision tree is as follows: Select an attribute as the root, create branches for each value, divide the cases into branches, and repeat the process for each branch until all cases on the branch have the same class. The C4.5 algorithm continues to be developed to improve its efficiency in handling complex and large data (Liu et al, 2019).…”
Section: Data Processing and Analysis (mentioning, confidence: 99%)
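The outline quoted above (select a splitting attribute, branch on each of its values, recurse until a branch is pure) translates directly into a short recursive procedure. The sketch below is illustrative only: it uses plain information gain and assumes categorical attributes, whereas full C4.5 also uses gain ratio, handles continuous values, and prunes the tree.

```python
from collections import Counter
from math import log2

def entropy(labels):
    total = len(labels)
    return -sum((c / total) * log2(c / total) for c in Counter(labels).values())

def build_tree(rows, labels, attributes):
    """Recursive outline of the procedure described above; rows are dicts
    mapping attribute name -> categorical value."""
    if len(set(labels)) == 1:                  # branch is pure: stop
        return labels[0]
    if not attributes:                         # nothing left to split on
        return Counter(labels).most_common(1)[0][0]

    def info_gain(attr):
        parts = {}
        for row, label in zip(rows, labels):
            parts.setdefault(row[attr], []).append(label)
        return entropy(labels) - sum(
            len(p) / len(labels) * entropy(p) for p in parts.values())

    best = max(attributes, key=info_gain)      # attribute chosen for this node
    node = {best: {}}
    remaining = [a for a in attributes if a != best]
    for value in {row[best] for row in rows}:  # one branch per attribute value
        branch = [(r, l) for r, l in zip(rows, labels) if r[best] == value]
        node[best][value] = build_tree([r for r, _ in branch],
                                       [l for _, l in branch],
                                       remaining)
    return node
```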
“…The decision tree represents the data in a way that is easily understandable. The decision tree is feasible in a way that it can handle a large set of data and tackles both numerical and categorical variables [52][53][54][55][56]. The decision tree consists of a variety of algorithms such as C4.5, Iterative Dichotomiser 3 (ID3), Chi-Squared Automatic Interaction Detection (CHAID), Classification and Regression Tree (CART), etc., and each algorithm is based on a specific mechanism.…”
Section: Introduction (mentioning, confidence: 99%)
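The quoted passage notes that these trees handle both numerical and categorical variables. For numerical ones, C4.5-style induction typically turns the column into a binary test by scanning candidate thresholds between sorted values. The Python sketch below illustrates that idea only; the function name and the sample data are made up for this example.

```python
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def best_numeric_threshold(values, labels):
    """Pick the cut point t on a numeric attribute that maximises
    information gain for the binary test (value <= t)."""
    pairs = sorted(zip(values, labels))
    base = entropy(labels)
    best_gain, best_t = 0.0, None
    for i in range(1, len(pairs)):
        if pairs[i - 1][0] == pairs[i][0]:
            continue                           # no cut between equal values
        t = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [l for v, l in pairs if v <= t]
        right = [l for v, l in pairs if v > t]
        gain = base - (len(left) * entropy(left) + len(right) * entropy(right)) / len(pairs)
        if gain > best_gain:
            best_gain, best_t = gain, t
    return best_t, best_gain

# Illustrative data only: fasting glucose values against a binary outcome.
glucose = [4.8, 5.1, 5.6, 6.9, 7.4, 8.2]
label = ["healthy", "healthy", "healthy", "diabetic", "diabetic", "diabetic"]
print(best_numeric_threshold(glucose, label))   # -> (6.25, 1.0)
```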