Background
Augmenting echocardiography with artificial intelligence would enable automated assessment of routine parameters and identification of disease patterns not easily recognized otherwise. View classification is an essential first step before deep learning can be applied to echocardiograms.
Methods
We trained 2- and 3-dimensional convolutional neural networks (CNNs) on transthoracic echocardiographic (TTE) studies from 909 patients (10,269 videos) to classify 9 view categories. TTE studies from 229 patients (2,582 videos) were used for internal validation. The CNNs were tested on comprehensive TTE studies from 100 patients, in which the 2 videos each CNN ranked as most likely to represent a view were evaluated, and on studies from 408 patients comprising five view categories obtained via point-of-care ultrasound (POCUS).
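To make the classification pipeline concrete, the sketch below shows how a 2-dimensional CNN can score individual echocardiographic frames and aggregate frame-level predictions into a video-level view label. The architecture, input dimensions, and frame-averaging scheme are illustrative assumptions for this sketch, not the network reported in the study.

```python
import torch
import torch.nn as nn

NUM_VIEWS = 9  # 9 view categories, as in the study

class View2DCNN(nn.Module):
    """Minimal 2D CNN view classifier (illustrative only; the study's
    actual architecture and preprocessing are not specified here)."""

    def __init__(self, num_views: int = NUM_VIEWS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),  # grayscale frames
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_views)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, H, W) single echo frames
        h = self.features(x).flatten(1)
        return self.classifier(h)

def classify_video(model: nn.Module, frames: torch.Tensor) -> int:
    """Aggregate per-frame softmax scores into one video-level label.

    frames: (num_frames, 1, H, W). Averaging frame-level softmax
    outputs is one common way to apply a 2D CNN to video; whether the
    study aggregated frames exactly this way is an assumption.
    """
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(frames), dim=1).mean(dim=0)
    return int(probs.argmax())

# Example with random tensors standing in for echo frames
model = View2DCNN()
dummy_frames = torch.randn(16, 1, 120, 160)  # hypothetical frame size
print(classify_video(model, dummy_frames))
```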
Results
The overall accuracy of the 2-dimensional CNN was 96.8% and the average area under the curve (AUC) was 0.997 on the comprehensive TTE testing set; these values were 98.4% and 0.998, respectively, on the POCUS set. For the 3-dimensional CNN, the accuracy and AUC were 96.3% and 0.998 on full TTE studies and 95.0% and 0.996 on POCUS videos, respectively. The positive predictive value, defined as the proportion of predicted views that were correctly identified, was higher with the 2- than the 3-dimensional network, exceeding 93% for the apical, short-axis aortic valve, and parasternal long-axis left ventricle views.
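The metrics above can be computed from per-video predictions and class scores. The sketch below, using scikit-learn with toy data in place of the study's test sets, illustrates the definitions; treating the "average AUC" as a macro-averaged one-vs-rest AUC is an assumption of this sketch.

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score, precision_score

# Toy per-video outputs standing in for a test set: `labels` are
# true view-category indices, `scores` are softmax probabilities.
rng = np.random.default_rng(0)
num_videos, num_views = 200, 9
labels = rng.integers(0, num_views, size=num_videos)
scores = rng.dirichlet(np.ones(num_views), size=num_videos)
preds = scores.argmax(axis=1)

# Overall accuracy: fraction of videos assigned the correct view.
acc = accuracy_score(labels, preds)

# Average AUC across views (one-vs-rest, macro-averaged).
auc = roc_auc_score(labels, scores, multi_class="ovr", average="macro")

# Per-view positive predictive value (precision): of the videos
# predicted as a given view, the fraction that truly are that view.
ppv = precision_score(labels, preds, average=None, zero_division=0)

print(f"accuracy={acc:.3f}, macro AUC={auc:.3f}")
print("per-view PPV:", np.round(ppv, 3))
```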
Conclusion
An automated CNN-based view classifier accurately classified cardiac views obtained with both TTE and POCUS. This view classifier will facilitate the application of deep learning to echocardiography.