Aims
The medical need for screening for aortic valve stenosis (AS), which enables timely and appropriate medical intervention, is rapidly increasing because of the high prevalence of AS in the elderly population. This study aimed to establish a screening method using understandable artificial intelligence (AI) to detect severe AS from heart sounds and to package the resulting AI into a smartphone application.

Methods and Results
In this diagnostic accuracy study, we developed multiple convolutional neural networks (CNNs) using a modified stratified 5-fold cross-validation to detect severe AS in electronic heart sound data recorded at three auscultation locations. Clinical validation was performed with the developed smartphone application in an independent cohort (model establishment: n = 556; clinical validation: n = 132). Our ensemble technique, which integrates the heart sounds from multiple auscultation locations, increased the detection accuracy of the CNN model by compensating for detection errors. The established smartphone application achieved a sensitivity, specificity, accuracy, and F1 value of 97.6% (41/42), 94.4% (85/90), 95.7% (126/132), and 0.93, respectively, which were higher than those of the consensus of cardiologists (81.0%, 93.3%, 89.4%, and 0.829, respectively), implying good utility for severe AS screening. Grad-CAM++ demonstrated that the built AIs focused on specific heart sounds when differentiating the severity of AS.

Conclusions
Our CNN model, which combines multiple auscultation locations and is exported to a smartphone application, could efficiently identify severe AS based on heart sounds. The visual explanation of the AI decisions for heart sounds was interpretable. These technologies may support medical training and remote consultations.
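To make the reported ensemble and metric figures concrete, the following Python sketch shows one plausible way to fuse per-location CNN probabilities and to recompute the application's sensitivity, specificity, accuracy, and F1 from the counts given above (41/42, 85/90, 126/132). The averaging-based fusion rule and the `probs_per_site` layout are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def ensemble_predict(probs_per_site, threshold=0.5):
    """Fuse per-location severe-AS probabilities by simple averaging.

    probs_per_site: array of shape (n_sites, n_patients) holding each CNN's
    predicted probability of severe AS. Averaging across sites is an assumed
    fusion rule; the study's ensemble may differ in detail.
    """
    mean_prob = np.mean(probs_per_site, axis=0)
    return (mean_prob >= threshold).astype(int)

def screening_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, accuracy, and F1 from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return sensitivity, specificity, accuracy, f1

# Counts reported for the clinical validation cohort (n = 132):
# 41 of 42 severe-AS patients detected, 85 of 90 non-severe patients ruled out.
print(screening_metrics(tp=41, fn=1, tn=85, fp=5))
# -> approximately (0.976, 0.944, 0.955, 0.932), close to the reported figures
```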
Background
Aortic stenosis remains one of the major causes of sudden cardiac death in the elderly. Noninvasive screening for severe aortic valve stenosis (AS) may enable early diagnosis and lead to appropriate and timely medical intervention.

Purpose
The aims of this study were 1) to develop an artificial intelligence (AI) model that detects severe AS based on heart sounds and 2) to build an application that screens patients using an electronic stethoscope and a smartphone, providing an efficient diagnostic workflow as a complementary screening tool in daily clinical practice.

Methods
We enrolled 100 patients diagnosed with severe AS and 200 patients without severe AS (no echocardiographic sign of AS [n=100], mild AS [n=50], moderate AS [n=50]). Heart sounds were recorded in 4000 Hz waveform audio format at the following 3 sites in each patient: the 2nd right intercostal space at the sternal border, Erb's area, and the apex. Each recording was divided into multiple segments of 4 seconds' duration, yielding 10,800 sound records in total. We developed multiple convolutional neural networks (CNNs), one for each of the 3 recording sites, designed to recognize severe AS in heart sounds. We adopted a stratified 4-fold cross-validation scheme in which the CNN was trained with 60% of the whole dataset, validated with 20%, and tested with the remaining 20% not used during training and validation. As performance metrics we adopted the accuracy, F1 value, and area under the curve (AUC), calculated as the average over all cross-validation folds. For the smartphone application, we combined the best CNN models from each recording site to obtain the best performance. A further 40 patients were newly enrolled for its clinical validation (no AS [n=10], mild AS [n=10], moderate AS [n=10], severe AS [n=10]). A sketch of the data preparation and cross-validation setup is shown after this abstract.

Results
The accuracy, F1 value, and AUC of each model were 88.9±5.7%, 0.888±0.006, and 0.953±0.008, respectively. The sensitivity and specificity were 87.9±2.2% and 89.9±2.4%. The recognition accuracy for moderate AS was significantly lower than for the other AS grades (moderate AS 74.1±6.1% vs. no AS 98.0±1.4%, mild AS 97.6±1.2%, and severe AS 87.9±2.2%, respectively; P<0.05). Our smartphone application showed a sensitivity of 100% (10/10), a specificity of 73.3% (22/30), and an accuracy of 80.0% (32/40), indicating good utility for screening. Detailed analysis of the 8 misclassified cases showed that most (7/8 [87.5%]) were affected by severe mitral or tricuspid valve regurgitation despite non-severe AS.

Conclusions
This study demonstrated the promising possibility of end-to-end screening for severe aortic valve stenosis using an electronic stethoscope and a smartphone application. This technology may improve the efficiency of daily clinical practice, particularly where human resources are limited, and may support remote medical consultation. Further investigations are necessary to increase accuracy.

Funding Acknowledgement
Type of funding sources: None.
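As a rough illustration of the data preparation described above, the sketch below segments a 4000 Hz heart-sound recording into non-overlapping 4-second windows and sets up a patient-level stratified 4-fold split with scikit-learn. The non-overlapping windowing, the patient-level splitting, and the use of StratifiedKFold are assumptions made for illustration; they are not taken from the study's code.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

SAMPLE_RATE = 4000          # Hz, as stated in the abstract
SEGMENT_SECONDS = 4         # each record is split into 4-second pieces
SEGMENT_SAMPLES = SAMPLE_RATE * SEGMENT_SECONDS  # 16,000 samples per segment

def segment_recording(waveform: np.ndarray) -> np.ndarray:
    """Split one mono waveform into non-overlapping 4-second segments.

    Non-overlapping windows are an assumption; the abstract only states that
    each recording was divided into multiple 4-second segments.
    """
    n_segments = len(waveform) // SEGMENT_SAMPLES
    return waveform[: n_segments * SEGMENT_SAMPLES].reshape(n_segments, SEGMENT_SAMPLES)

# Patient-level stratified folds so that segments from one patient never end up
# in both training and test data (a common precaution, assumed here).
patient_labels = np.array([0] * 200 + [1] * 100)   # 0 = non-severe, 1 = severe AS
patient_ids = np.arange(len(patient_labels))

skf = StratifiedKFold(n_splits=4, shuffle=True, random_state=0)
for fold, (train_val_idx, test_idx) in enumerate(skf.split(patient_ids, patient_labels)):
    # One fold is held out for testing; the remaining patients would be further
    # divided into training and validation sets as described in the abstract.
    print(f"fold {fold}: {len(train_val_idx)} train/val patients, {len(test_idx)} test patients")
```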