Objectives
A paucity of point-of-care ultrasound (POCUS) databases limits machine learning (ML) development. We assessed the feasibility of training an ML algorithm to visually estimate left ventricular ejection fraction (EF) from a subxiphoid (SX) window after training only on apical 4-chamber (A4C) images.
Methods
We used a long short-term memory (LSTM) algorithm for image analysis. Using the Stanford EchoNet-Dynamic database of 10,036 A4C videos with exact calculated EFs, we tested three ML training permutations: first, training on unaltered Stanford A4C videos; second, on unaltered and 90° clockwise (CW) rotated videos; and third, on unaltered, 90° CW rotated, and horizontally flipped videos. As a real-world test, we obtained 615 SX videos from Harbor-UCLA (HUCLA) whose EFs were reported in 5% ranges. Because the algorithm outputs point estimates while the HUCLA reads are ranges, we performed 1000 randomizations of EF point estimates within the HUCLA ranges, computing a mean absolute error (MAE) for each randomization for comparison, and performed Bland–Altman analyses.
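As an illustrative sketch only (the study's actual training code is not part of this abstract, so the frame layout, the flip axis, and all function names below are assumptions), the two data manipulations described above, view augmentation and EF-range randomization, could look like the following:

```python
import numpy as np

def view_permutations(video):
    """Build the three training permutations from an A4C clip.

    video: ndarray of frames shaped (T, H, W); assumes square frames so a
    90-degree rotation preserves shape. Whether the flip is applied to the
    unaltered or the rotated clip is an assumption.
    """
    rotated = np.rot90(video, k=-1, axes=(1, 2))  # 90° clockwise rotation
    flipped = video[:, :, ::-1]                   # horizontal (left-right) flip
    return [video, rotated, flipped]

def randomized_mae(pred_ef, ef_ranges, n_draws=1000, seed=0):
    """Compare point estimates against 5%-wide EF ranges.

    Each draw samples one point EF uniformly within every reported range,
    yields one MAE, and the draws are summarized by their mean and range.
    """
    rng = np.random.default_rng(seed)
    lows, highs = np.asarray(ef_ranges, dtype=float).T
    maes = [np.mean(np.abs(np.asarray(pred_ef) - rng.uniform(lows, highs)))
            for _ in range(n_draws)]
    return np.mean(maes), (np.min(maes), np.max(maes))
```

Uniform sampling within each range is one reasonable reading of "randomizations of EF point estimation"; any other distribution over the 5% interval would fit the description equally well.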
Results
The mean MAE of the algorithm's EF estimates was 23.0 (range 22.8–23.3) with unaltered A4C training video; 16.7 (range 16.5–16.9) with unaltered and 90° CW rotated video; and 16.6 (range 16.3–16.8) with unaltered, 90° CW rotated, and horizontally flipped video. Bland–Altman analysis showed the weakest agreement at EFs of 40–45%.
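For readers unfamiliar with the analysis, a Bland–Altman comparison reduces to the pairwise differences and means of two measurement series, with limits of agreement conventionally set at the mean difference ± 1.96 SD. A minimal sketch (variable names are illustrative, not from the study):

```python
import numpy as np

def bland_altman(ml_ef, ref_ef):
    """Summary statistics for a Bland–Altman plot of two EF series."""
    ml_ef, ref_ef = np.asarray(ml_ef, float), np.asarray(ref_ef, float)
    diffs = ml_ef - ref_ef          # y-axis: disagreement between methods
    means = (ml_ef + ref_ef) / 2.0  # x-axis: best estimate of the true EF
    bias = diffs.mean()
    sd = diffs.std(ddof=1)
    return means, diffs, bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```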
Conclusions
We successfully adapted data from an unrelated ultrasound window to train a POCUS ML algorithm, achieving fair MAE by manipulating the training data to simulate a different ultrasound examination. This approach may prove important for future POCUS algorithm design by helping to overcome the paucity of POCUS databases.