2020
DOI: 10.1155/2020/8843963
Deep Layer Kernel Sparse Representation Network for the Detection of Heart Valve Ailments from the Time-Frequency Representation of PCG Recordings

Abstract: Heart valve ailments (HVAs) arise from defects in the valves of the heart and, if untreated, may cause heart failure, clots, and even sudden cardiac death. Automated early detection of HVAs is needed in hospitals for proper diagnosis of pathological cases, timely treatment, and reduced mortality. Heart valve abnormalities alter the heart sounds and murmurs, which can be faithfully captured by phonocardiogram (PCG) recordings. In this paper, a time-frequency based deep …

Cited by 19 publications (6 citation statements). References 84 publications.
“…Yaseen et al. employed Mel-frequency cepstral coefficients (MFCCs) combined with discrete wavelet transform features as inputs to deep neural network (DNN) classifiers, achieving an accuracy of 92.1% [80]. Ghosh et al. extracted features from the time-frequency matrix of the heart sound recordings and fed them into a Deep Layer Kernel Sparse Representation Network classifier, resulting in 99.24% accuracy [57]. Abbas et al. developed a novel attention-based transformer architecture that combines deep learning with a vision transformer.…”
Section: Results
confidence: 99%
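The pipelines summarized above share a common first step: converting the PCG recording into a time-frequency matrix before classification. A minimal sketch of that step on a synthetic signal, using an STFT magnitude via SciPy (the synthetic signal, window length, and mean-pooling here are illustrative assumptions, not the cited authors' settings):

```python
import numpy as np
from scipy.signal import stft

# Synthetic stand-in for a PCG recording: two short low-frequency bursts
# (S1/S2-like heart sounds) at a 1 kHz sample rate. A real pipeline would
# load an actual recording instead.
fs = 1000
t = np.arange(0, 1.0, 1 / fs)
pcg = np.zeros_like(t)
for onset, f0 in [(0.10, 45.0), (0.45, 60.0)]:
    mask = (t >= onset) & (t < onset + 0.08)
    pcg[mask] = np.sin(2 * np.pi * f0 * (t[mask] - onset)) * np.hanning(mask.sum())

# Time-frequency matrix: STFT magnitude, one simple example of the
# time-frequency representations used in the works cited above.
f, seg_t, Zxx = stft(pcg, fs=fs, nperseg=128, noverlap=96)
tf_matrix = np.abs(Zxx)            # shape: (frequency bins, time frames)

# Pool the matrix into a fixed-length feature vector for a downstream
# classifier (mean spectral profile, one value per frequency bin).
features = tf_matrix.mean(axis=1)
print(tf_matrix.shape, features.shape)
```

From here, the feature vector (or the full matrix, for image-style networks) would be passed to whatever classifier a given study uses.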
“…In total, 71 original articles were included. These studies can be broadly categorized into several groups: methods (15 papers, including heart sound segmentation [6–13], noise cancellation [14–16], algorithm development [17–19], and database development [20]), cardiac murmur detection (36 papers [21–56]), valvular heart disease (6 papers [57–62]), congenital heart disease (4 papers [63–66]), heart failure (4 papers [67–70]), coronary artery disease (2 papers [71, 72]), rheumatic heart disease (2 papers [73, 74]), and extracardiac applications (2 papers [75, 76]).…”
Section: Methods
confidence: 99%
“…The chirplet transform combines the short-time Fourier transform (STFT) with the wavelet transform and was introduced by Mann et al. [54]. In the few reported works, CT was applied to heart-sound recordings provided by the GitHub database (not PhysioNet 2016) [55]. The pristine quality of the GitHub database was the most significant factor in increasing classification accuracy, regardless of the machine learning method used [56, 57].…”
Section: Discussion
confidence: 99%
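To make the wavelet side of such a combined time-frequency analysis concrete, here is a minimal Morlet-wavelet scalogram computed by direct convolution. This is an illustrative sketch only; it is not the chirplet transform of Mann et al., and the frequencies, cycle count, and test signal are assumptions:

```python
import numpy as np

def morlet_scalogram(x, fs, freqs, n_cycles=6.0):
    """Magnitude scalogram via direct convolution with complex Morlet
    wavelets, one row per analysis frequency (illustrative sketch)."""
    out = np.empty((len(freqs), len(x)), dtype=complex)
    for i, f0 in enumerate(freqs):
        sigma = n_cycles / (2 * np.pi * f0)          # Gaussian width in seconds
        t = np.arange(-4 * sigma, 4 * sigma, 1 / fs)
        wavelet = np.exp(2j * np.pi * f0 * t) * np.exp(-t**2 / (2 * sigma**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))  # unit energy
        out[i] = np.convolve(x, wavelet, mode="same")
    return np.abs(out)

# A pure 40 Hz tone should dominate the 40 Hz row, not the 120 Hz row.
fs = 1000
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 40 * t)
scalogram = morlet_scalogram(x, fs, freqs=[40.0, 120.0])
print(scalogram[0].mean() > scalogram[1].mean())
```

Unlike the fixed STFT window, the wavelet's effective window shrinks as the analysis frequency rises, which is the multi-resolution property such combined transforms exploit.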
“…Features are the characteristics the human brain uses to automatically identify and distinguish between objects; they are similar in concept to variables in regression analysis. The features that machines can recognize are often numbers or symbols, while human experts extract physiological or pathological information from heart sounds through features such as heart rate, heart rhythm, murmur timing and shape, heart sound frequency, and the presence of additional heart sounds [19].…”
Section: Principles of AI-Based Cardiac Auscultation
confidence: 99%