2021
DOI: 10.1155/2021/5581806

Comparative Study on Heart Disease Prediction Using Feature Selection Techniques on Classification Algorithms

Abstract: Heart disease is recognized as one of the leading causes of death worldwide. Biomedical instruments and various hospital systems hold massive quantities of clinical data, so understanding the data related to heart disease is very important for improving prediction accuracy. This article conducts an experimental evaluation of the performance of models created using classification algorithms and relevant features selected using various feature selection approaches. For results of the explorat…

Cited by 64 publications (28 citation statements)
References 19 publications
“…DT with forward, backward, and bidirectional feature selection achieves the highest accuracy, 98.2%. In a significant comparison, the accuracy of our proposed model is found to be better than that of the recent studies [33] and [34], which reported 88.52% and 85.8%, respectively.…”
Section: Results (contrasting)
confidence: 52%
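The excerpt above reports a decision tree (DT) combined with forward, backward, and bidirectional feature selection as the best-performing configuration. As an illustration only, not the cited study's exact pipeline, the sketch below wraps a decision tree in scikit-learn's SequentialFeatureSelector for the forward and backward directions; the dataset, feature budget, and cross-validation settings are assumptions, and bidirectional (floating) selection would require a different tool, such as mlxtend's floating variants.

```python
# Illustrative sketch (not the cited study's pipeline): a decision tree with
# forward and backward sequential feature selection. Dataset, feature budget,
# and CV settings are assumptions chosen only to make the example runnable.
from sklearn.datasets import load_breast_cancer  # stand-in tabular dataset
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

for direction in ("forward", "backward"):
    selector = SequentialFeatureSelector(
        DecisionTreeClassifier(random_state=0),
        n_features_to_select=8,   # assumed feature budget
        direction=direction,
        cv=5,
    )
    # Selection and classification are combined in one pipeline so that
    # feature selection is refit inside each cross-validation fold.
    model = make_pipeline(selector, DecisionTreeClassifier(random_state=0))
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{direction} selection: mean accuracy = {scores.mean():.3f}")
```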
“…Heart failure is a clinical syndrome characterized by a reduced ability of the heart to pump blood to other parts of the body or fill with blood [1], [2]. Heart failure leads to fatigue, shortness of breath, and poor quality of life.…”
Section: Introduction (mentioning)
confidence: 99%
“…However, they are limited in that they use a "one size fits all" approach [101]. Various methods of scoring features can be used, such as the Pearson Correlation Coefficient, Mutual Information, or analysis of variance (ANOVA) [100,102]. For continuous inputs and a categorical output, which is the configuration used in this work, the ANOVA F-Test is a suitable filter method [102].…”
Section: Filter Methods - ANOVA F-Test (mentioning)
confidence: 99%
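To make the filter-method idea in the excerpt above concrete, here is a minimal sketch assuming scikit-learn's SelectKBest with the f_classif (ANOVA F-test) scorer; the dataset and the number of retained features k are illustrative assumptions, not values from the cited work.

```python
# Minimal sketch of an ANOVA F-test filter: score each continuous feature
# against the categorical target and keep the top-k highest-scoring features.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif

X, y = load_breast_cancer(return_X_y=True)

selector = SelectKBest(score_func=f_classif, k=10)  # k is an assumed budget
X_selected = selector.fit_transform(X, y)

# f_classif returns one F statistic (and p-value) per feature; a higher F
# means more between-group separation relative to within-group spread.
for idx in selector.get_support(indices=True):
    print(f"feature {idx:2d}: F = {selector.scores_[idx]:.1f}")
```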
“…The ANOVA F-Test selects the features with the highest ratio of between-group variance to within-group variance. It checks the variances across all features in the feature group and selects those whose class distributions have minimal overlap [102]. By selecting features with minimal or no overlap, we aim to provide the model with features that make it easier to distinguish between the two groups of data [102].…”
Section: Filter Methods - ANOVA F-Test (mentioning)
confidence: 99%
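For reference, the "ratio of variance between groups to variance within groups" described in the excerpt above is the standard one-way ANOVA F statistic, stated here in its textbook form rather than quoted from the cited work:

```latex
% One-way ANOVA F statistic for a single feature, with K classes and N samples:
% between-group mean square divided by within-group mean square.
F = \frac{\sum_{k=1}^{K} n_k\,(\bar{x}_k - \bar{x})^2 / (K-1)}
         {\sum_{k=1}^{K} \sum_{i=1}^{n_k} (x_{ki} - \bar{x}_k)^2 / (N-K)}
```

Here $\bar{x}_k$ is the mean of the feature within class $k$, $\bar{x}$ is the overall mean, and $n_k$ is the size of class $k$. A large F indicates class means that are far apart relative to the within-class spread, i.e., class distributions with minimal overlap.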