2017
DOI: 10.1016/j.cmpb.2016.10.020
The application of a decision tree to establish the parameters associated with hypertension

Cited by 79 publications (48 citation statements)
References 20 publications
“…Variables that have the best splitting-criterion value are retained in the model. In the decision tree, the first variable, the root node, is the most important factor, and the other variables can be ranked in order of importance. It can also be stated that the root node is the variable that divides the whole population with the highest information gain.…”
Section: Methods (mentioning)
confidence: 99%
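The root-node selection described in this statement can be illustrated with a short sketch. Everything below is illustrative: the variable names (age_group, bmi_class, smoking), the synthetic data, and the helper functions are assumptions rather than details of the cited study; the point is only that the variable giving the highest information gain over the whole population becomes the root node.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of a label array (sum over the m classes)."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(feature, labels):
    """Entropy of the whole population minus the weighted entropy after the split."""
    total = entropy(labels)
    weighted = 0.0
    for value in np.unique(feature):
        mask = feature == value
        weighted += mask.mean() * entropy(labels[mask])
    return total - weighted

# Hypothetical categorical predictors and a binary hypertension label.
rng = np.random.default_rng(0)
predictors = {
    "age_group": rng.integers(0, 3, 200),
    "bmi_class": rng.integers(0, 2, 200),
    "smoking":   rng.integers(0, 2, 200),
}
label = rng.integers(0, 2, 200)

# The variable with the highest information gain becomes the root node.
gains = {name: information_gain(col, label) for name, col in predictors.items()}
root = max(gains, key=gains.get)
print(gains, "root node:", root)
```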
“…Data mining is a retrospective computational method for extracting knowledge from large databases. Data mining algorithms were applied to define new models for predicting the risk factors of hypertension. The decision tree is easy to implement and interpret.…”
Section: Introduction (mentioning)
confidence: 99%
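As a rough illustration of why decision trees are considered easy to implement and interpret, the sketch below fits a shallow tree with scikit-learn and prints its rules. The feature names and synthetic data are hypothetical; the cited study's actual variables and software are not specified in the statement above.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical feature matrix (e.g. age, BMI, cholesterol) and hypertension label.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=300) > 0).astype(int)

# A shallow tree keeps the fitted model small enough to read.
tree = DecisionTreeClassifier(max_depth=3, criterion="entropy", random_state=0)
tree.fit(X, y)

# export_text prints the learned if/else rules, which is what makes the tree interpretable.
print(export_text(tree, feature_names=["age", "bmi", "cholesterol"]))
```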
“…A total of 15,323 records were initially examined and analyzed for potential construction of a decision tree approach. To meet the strict criteria for building data mining algorithms, records in which certain variables were missing could not be used, since the decision tree approach will not work with missing data points [21]. As a result, 28.6% of the total data (4348 out of 15,323 records) failed to meet the criteria for data selection and were excluded accordingly.…”
Section: Data Selection (mentioning)
confidence: 99%
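A minimal sketch of the kind of complete-case filtering described above, assuming a pandas DataFrame with NaN marking a missing measurement (the column names and values are hypothetical, not the study's data):

```python
import numpy as np
import pandas as pd

# Hypothetical records table; NaN marks a missing measurement.
df = pd.DataFrame({
    "age": [52, 61, np.nan, 45, 70],
    "bmi": [27.1, np.nan, 24.3, 31.0, 29.5],
    "systolic_bp": [138, 142, 150, np.nan, 160],
})

# Tree-building pipelines that cannot handle missing values simply
# drop every record with at least one missing field.
complete = df.dropna()
excluded = len(df) - len(complete)
print(f"excluded {excluded} of {len(df)} records ({excluded / len(df):.1%})")
```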
“…The sum is computed over m classes. In the DT, the first variable, the root node, is the most important variable, and the other variables appear in the tree according to their importance. It can also be stated that the root node is the variable that divides the whole population with the highest information gain.…”
Section: Methods (mentioning)
confidence: 99%
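The "sum over m classes" referred to here is presumably the impurity measure behind the splitting criterion; the truncated quote does not say which one, so the following is only the usual entropy-based formulation (with Gini as the common alternative), written out for reference:

```latex
% Entropy of a node whose class proportions are p_1, ..., p_m
H = -\sum_{i=1}^{m} p_i \log_2 p_i

% Gini impurity, an alternative criterion with the same sum over m classes
G = 1 - \sum_{i=1}^{m} p_i^2

% Information gain of splitting node S on variable A
IG(S, A) = H(S) - \sum_{v \in \mathrm{values}(A)} \frac{|S_v|}{|S|}\, H(S_v)
```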
“…In the DT, the first variable, the root node, is the most important variable, and the other variables appear in the tree according to their importance. 15,25 It can also be stated that the root node is the variable that divides the whole population with the highest information gain. It is a common approach in data mining methods to divide the data set into two parts: a training data set, generally 70% of the subjects, and a testing data set, the remaining 30% of the subjects.…”
Section: Decision Tree (mentioning)
confidence: 99%
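A minimal sketch of the 70%/30% split described in that statement, assuming scikit-learn's train_test_split and synthetic data (all names and values here are hypothetical):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Hypothetical predictors and binary hypertension labels.
rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 5))
y = rng.integers(0, 2, 1000)

# 70% of subjects for training, 30% for testing, as described above.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, random_state=0, stratify=y)

tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("test accuracy:", tree.score(X_test, y_test))
```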