ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2019
DOI: 10.1109/icassp.2019.8682668
Short-segment Heart Sound Classification Using an Ensemble of Deep Convolutional Neural Networks

Abstract: This paper proposes a framework based on deep convolutional neural networks (CNNs) for automatic heart sound classification using short segments of individual heart beats. We design a 1D-CNN that directly learns features from raw heart-sound signals, and a 2D-CNN that takes as input two-dimensional time-frequency feature maps based on Mel-frequency cepstral coefficients (MFCC). We further develop a time-frequency CNN ensemble (TF-ECNN) combining the 1D-CNN and 2D-CNN based on score-level fusion of the class pr…
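The score-level fusion the abstract describes can be sketched as below. This is a minimal illustration, not the paper's implementation: the function name `fuse_scores`, the equal-weight averaging rule, and the toy probabilities are all assumptions for demonstration.

```python
import numpy as np

def fuse_scores(probs_1d, probs_2d, weight=0.5):
    """Score-level fusion of two classifiers.

    probs_1d, probs_2d: arrays of shape (n_segments, n_classes) holding
    class-probability (softmax) outputs from the 1D-CNN and the 2D-CNN.
    A weighted average is one common fusion rule; the paper's exact
    combination rule may differ.
    """
    fused = weight * probs_1d + (1.0 - weight) * probs_2d
    # Predicted class per segment is the argmax of the fused scores.
    return fused.argmax(axis=1)

# Toy example: three segments, binary task (0 = normal, 1 = abnormal).
p1 = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]])  # hypothetical 1D-CNN scores
p2 = np.array([[0.7, 0.3], [0.6, 0.4], [0.1, 0.9]])  # hypothetical 2D-CNN scores
labels = fuse_scores(p1, p2)  # → array([0, 0, 1])
```

On the middle segment the two networks disagree; equal-weight fusion yields a 0.5/0.5 tie, which `argmax` resolves to class 0. Tuning `weight` on validation data is one way such an ensemble is typically calibrated.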

Cited by 71 publications (49 citation statements)
References 19 publications
“…For this reason, heart signals have been critically studied to make a diagnosis[10][11][12][13]. CNN is considered a state-of-the-art tool for detecting and classifying heart signals and has been studied in several variants: 1-dimensional, 2-dimensional, or a combination of both[14,15]. Similarly, Noman et al.[15] proposed a framework based on a 1-dimensional CNN for direct feature learning from raw heart signals and a 2-dimensional CNN that takes 2-dimensional time-frequency feature maps as input.…”
mentioning
confidence: 99%
“…In the sound-processing field, few studies have addressed classifying and grading fluid sounds, apart from the analysis and recognition of human voices, a highly active area in which work such as speech recognition [34][35][36][37][38][39][40] has been performed. However, studies have recently been conducted on the classification of fluid sounds in other areas, including human heartbeats [41], urban sounds [42][43][44][45][46][47][48] (play sounds, car horns, air-conditioning sounds, engine sounds, etc.), and music [49][50][51][52][53][54][55][56], using deep learning.…”
Section: Fig1 Venn Diagram Of Artificial Intelligencementioning
confidence: 99%
“…The experiments in [26,28,[32][33][34][35][36][37]40] were performed with the Physionet database [20]. In [32], features were extracted in the time, frequency, and wavelet domains, together with statistical features, for a total of 29 features.…”
Section: Authormentioning
confidence: 99%
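The hand-crafted time-domain and statistical features mentioned in the citation statement above can be sketched as follows. This is an illustrative assumption: the function `segment_features` and its specific features are hypothetical examples of this style of pipeline, not the 29 features used in the cited work.

```python
import numpy as np

def segment_features(x):
    """Hypothetical time-domain / statistical features for one
    heart-sound segment, in the spirit of hand-crafted pipelines.
    """
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std()
    eps = 1e-12  # guard against division by zero on constant segments
    return {
        "mean": mu,
        "std": sigma,
        "rms": np.sqrt(np.mean(x ** 2)),
        "skewness": ((x - mu) ** 3).mean() / (sigma ** 3 + eps),
        "kurtosis": ((x - mu) ** 4).mean() / (sigma ** 4 + eps),
        # Count sign changes as a crude measure of dominant frequency.
        "zero_crossings": int(np.sum(np.abs(np.diff(np.sign(x))) > 0)),
    }

# Toy segment: two periods of a sine wave standing in for a heart beat.
feats = segment_features(np.sin(np.linspace(0, 4 * np.pi, 200)))
```

Such feature vectors would then be fed to a conventional classifier, in contrast to the CNN approaches of the main paper, which learn features directly from the raw signal or its time-frequency representation.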