This article studies whether heart sound signals can be used for emotion recognition. First, we built a small emotional heart sound database and simultaneously recorded the participants' ECG for comparative analysis. Second, based on the characteristics of heart sound signals, two emotion evaluation indicators were proposed: heart sound HRV (the variability of intervals between successive heartbeats) and heart sound DSV (the variability of the ratio of diastolic to systolic duration). Then, linear and nonlinear features were extracted from the two indicators to recognize four kinds of emotion. Moreover, the valence dimension, the arousal dimension and the valence-arousal synthesis were used as evaluation standards. The experimental results demonstrate that heart sound signals can be used for emotion recognition, and that recognition is more effective when the HRV and DSV features are combined. Finally, the average accuracy of four-emotion recognition on the valence dimension, the arousal dimension and the valence-arousal synthesis reached 96.875%, 88.5417% and 81.25%, respectively.
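As a rough illustration of the two indicators, the sketch below derives a heart-sound HRV series and a DSV series from per-cycle S1 and S2 onset times. The function name hrv_and_dsv and the assumption that a segmentation step has already produced the onset arrays are ours, not the paper's; the paper's exact definitions and downstream feature set may differ.

```python
import numpy as np

def hrv_and_dsv(s1_onsets, s2_onsets):
    """Illustrative computation of the two indicators described above.

    s1_onsets, s2_onsets: arrays of S1 and S2 onset times in seconds,
    one pair per cardiac cycle, as produced by a heart sound
    segmentation step (not shown here).
    """
    s1 = np.asarray(s1_onsets, dtype=float)
    s2 = np.asarray(s2_onsets, dtype=float)

    # Heart-sound HRV: intervals between successive S1 onsets
    # (the heart sound analogue of ECG R-R intervals).
    hrv = np.diff(s1)

    # Systole: S1 onset -> S2 onset of the same cycle.
    systole = s2 - s1
    # Diastole: S2 onset -> S1 onset of the next cycle.
    diastole = s1[1:] - s2[:-1]

    # DSV: cycle-by-cycle ratio of diastolic to systolic duration,
    # whose variability serves as the second indicator.
    dsv = diastole / systole[:-1]
    return hrv, dsv
```

Simple linear features such as the standard deviation of the HRV series, or nonlinear features such as sample entropy, could then be computed over both series before classification.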
In this paper, a new method for biometric characterization of heart sounds based on multimodal multiscale dispersion entropy is proposed. First, the heart sound is segmented into periods, and each single-cycle heart sound is decomposed into a group of intrinsic mode functions (IMFs) by improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN). These IMFs are then divided into a series of frames, from which the refined composite multiscale dispersion entropy (RCMDE) is calculated as the feature representation of the heart sound. In simulation experiment I, carried out on the open heart sound databases Michigan, Washington and Littmann, this feature representation was combined with a heart sound segmentation method based on logistic regression (LR) and hidden semi-Markov models (HSMM), and feature selection was performed through the Fisher ratio (FR). Finally, the Euclidean distance (ED) and a nearest-match rule were used for matching and identification, and the recognition accuracy was 96.08%. To improve the practical value of the method, in experiment II it was applied to a database of 80 heart sounds collected from 40 volunteers, in order to examine the effect of single-cycle heart sounds with different starting positions on performance. The experimental results show that single-cycle heart sounds starting at the onset of the first heart sound (S1) give the highest recognition rate, 97.5%. In summary, the proposed method is effective for heart sound biometric recognition.
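The RCMDE step can be sketched as follows. This is a generic refined composite multiscale dispersion entropy implementation (normal-CDF mapping to c classes, embedding dimension m, delay 1), not the authors' code; the function names and parameter defaults are assumptions, and the ICEEMDAN decomposition and framing stages are omitted.

```python
import numpy as np
from scipy.stats import norm

def dispersion_probs(x, m=2, c=6):
    """Dispersion-pattern probabilities of a 1-D series (delay = 1)."""
    # Map samples to c classes via the normal CDF (NCDF mapping).
    y = norm.cdf(x, loc=np.mean(x), scale=np.std(x))
    z = np.clip(np.round(c * y + 0.5).astype(int), 1, c)
    # Count occurrences of each of the c**m possible patterns.
    n = len(z) - m + 1
    patterns = np.zeros(c ** m)
    for i in range(n):
        idx = 0
        for j in range(m):
            idx = idx * c + (z[i + j] - 1)
        patterns[idx] += 1
    return patterns / n

def rcmde(x, scale, m=2, c=6):
    """Refined composite multiscale dispersion entropy at one scale."""
    probs = []
    for k in range(scale):
        # Coarse-grain the series with starting offset k.
        seg = np.asarray(x, dtype=float)[k:]
        n = len(seg) // scale
        cg = seg[:n * scale].reshape(n, scale).mean(axis=1)
        probs.append(dispersion_probs(cg, m, c))
    # Refined composite: average the pattern probabilities over all
    # offsets, then compute a single entropy value.
    p = np.mean(probs, axis=0)
    p = p[p > 0]
    return -np.sum(p * np.log(p))
```

Computing rcmde over a range of scales for each frame of each IMF would yield the multiscale feature vector that is then reduced by the Fisher ratio before matching.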
To design a heart sound classification algorithm with low hardware requirements that is suitable for mobile terminals, this paper proposes a laconic heart sound neural network (LHSNN). First, three requirements that the LHSNN design must meet are stated. Then, the specific implementation of the LHSNN is given as follows: 1) a spectrogram is used as the representation of the heart sound features, and the size of the heart sound spectrum is determined according to the principle of lossless information; 2) according to the characteristics of heart sounds and the design requirements, a neural network is selected and analyzed in depth; 3) through an optimization step, the network structure is reduced until it satisfies the requirements for running on mobile terminals. Finally, the PhysioNet/CinC Challenge 2016 public heart sound database is used to establish a heart sound spectrum library for the experiments. The experimental results show that the LHSNN obtains a recognition rate of 96.16% and a modified accuracy of 0.8950, and that it can run on mobile terminals. In addition, the LHSNN is shown to be adaptable using the open heart sound dataset of the University of Catania. This research has positive significance for the classification and recognition of heart sounds in natural environments.

Index Terms: Heart sound, laconic neural network, sound spectrum, mobile terminal.
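To make the spectrogram-plus-small-network idea concrete, the sketch below builds a log-spectrogram input and a deliberately compact Keras CNN. This is not the LHSNN architecture, which the abstract does not specify; the layer sizes, sampling rate and spectrogram parameters are illustrative assumptions, and a model of this kind could be converted with tf.lite.TFLiteConverter.from_keras_model for deployment on mobile terminals.

```python
import numpy as np
import tensorflow as tf
from scipy.signal import spectrogram

def heart_sound_spectrogram(signal, fs=2000, nperseg=256, noverlap=128):
    """Log-magnitude spectrogram used as the network input representation."""
    f, t, sxx = spectrogram(signal, fs=fs, nperseg=nperseg, noverlap=noverlap)
    return np.log(sxx + 1e-10)

def build_small_cnn(input_shape, n_classes=2):
    """A deliberately small CNN over heart sound spectrograms (illustrative)."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),   # (freq, time, 1)
        tf.keras.layers.Conv2D(8, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

# Example usage: a 2-second recording at 2 kHz, normal/abnormal output.
spec = heart_sound_spectrogram(np.random.randn(4000))
model = build_small_cnn(input_shape=spec.shape + (1,))
```

Keeping the convolutional stack shallow and replacing fully connected layers with global average pooling keeps the parameter count small, which is the general design direction implied by a "laconic" network intended for mobile hardware.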