Artificial intelligence (AI) is increasingly being adopted in medical research and applications. Medical AI devices have been continuously approved by the Food and Drug Administration in the United States and by the responsible institutions of other countries. Ultrasound (US) imaging is used across an extensive range of medical fields. However, AI-based US image analysis and its clinical implementation have not progressed as steadily as those of other medical imaging modalities. Issues characteristic of US imaging, such as its manual operation and acoustic shadows, make image quality control difficult. In this review, we introduce global trends in medical AI research on US imaging from both clinical and basic perspectives. We also discuss US image preprocessing, algorithms suited to US image analysis, AI explainability for obtaining informed consent, the approval process for medical AI devices, and future perspectives on the clinical application of AI-based US diagnostic support technologies.
Artificial intelligence (AI) technologies have recently been applied to medical imaging for diagnostic support. In fetal ultrasound screening for congenital heart disease (CHD), consistently accurate diagnosis remains challenging owing to manual operation and technical differences among examiners. Hence, we proposed an architecture, Supervised Object detection with Normal data Only (SONO), based on a convolutional neural network (CNN), to detect cardiac substructures and structural abnormalities in fetal ultrasound videos. We used a barcode-like timeline to visualize the probability of detection and calculated an abnormality score for each video. Performance in detecting cardiac structural abnormalities was evaluated on videos of sequential cross-sections around the four-chamber view (Heart) and the three-vessel trachea view (Vessels). The mean abnormality score in CHD cases was significantly higher than that in normal cases (p < 0.001). The areas under the receiver operating characteristic curve produced by SONO were 0.787 for Heart and 0.891 for Vessels, higher than those of conventional algorithms. SONO automatically detects each cardiac substructure in fetal ultrasound videos and shows applicability to detecting cardiac structural abnormalities. The barcode-like timeline helps examiners capture the clinical characteristics of each case, and it also addresses one of the important goals in the field of medical AI: the development of “explainable AI.”
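The abstract does not give SONO's exact scoring formula, but the idea of turning a barcode-like timeline of per-frame detections into a per-video abnormality score can be sketched as follows. This is a minimal illustration assuming the score reflects how often expected substructures go undetected; the function name, threshold, and aggregation rule are placeholders, not the authors' method.

```python
import numpy as np

def abnormality_score(detection_probs, threshold=0.5):
    """Illustrative per-video abnormality score (not the actual SONO formula).

    detection_probs: array of shape (n_frames, n_substructures) holding the
    per-frame detection confidence for each cardiac substructure -- the
    "barcode-like timeline". A substructure that is rarely detected across
    frames contributes a high score, on the assumption that structures
    missing from a normal-data-only detector suggest an abnormality.
    """
    probs = np.asarray(detection_probs, dtype=float)
    detected = probs >= threshold                # binarize: the "barcode" pattern
    miss_rate = 1.0 - detected.mean(axis=0)      # fraction of frames missing each part
    return float(miss_rate.mean())               # average miss rate over substructures
```

Under this toy scoring, a video in which every substructure is confidently detected in every frame scores 0.0, while a video in which a substructure never appears scores closer to 1.0, matching the reported pattern of higher scores in CHD cases.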
Image segmentation is the pixel-by-pixel detection of objects; it is the most challenging but also the most informative of the fundamental machine learning tasks, which include image classification and object detection. Pixel-by-pixel segmentation is required to apply machine learning to fetal cardiac ultrasound screening: cardiac substructures such as the ventricular septum, which are small and change shape dynamically with the fetal heartbeat, must be detected precisely. This task is difficult for general segmentation methods such as DeepLab v3+ and U-net. Hence, in this study we proposed a novel segmentation method, Cropping-Segmentation-Calibration (CSC), specific to the ventricular septum in ultrasound videos. CSC employs the time-series information of videos and section-specific information to calibrate the output of U-net. The ventricular septum was annotated in 615 frames from 421 normal fetal cardiac ultrasound videos of 211 pregnant women who underwent screening. The dataset was split into training and test data at a ratio of 2:1, and three-fold cross-validation was conducted. The segmentation results of DeepLab v3+, U-net, and CSC were evaluated using the mean intersection over union (mIoU), yielding 0.0224, 0.1519, and 0.5543, respectively. The results reveal the superior performance of CSC.
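The mIoU values reported above (0.0224, 0.1519, 0.5543) are a standard segmentation metric. As a reference for how such numbers are typically computed, here is a generic per-frame mIoU sketch for binary masks; the handling of empty frames and the averaging scheme are assumptions, not the study's exact evaluation code.

```python
import numpy as np

def mean_iou(pred_masks, true_masks):
    """Mean intersection over union (mIoU) for binary segmentation masks.

    pred_masks / true_masks: arrays of shape (n_frames, H, W), nonzero where
    the target (e.g. the ventricular septum) is predicted / annotated.
    IoU is computed per frame and averaged; a frame where both masks are
    empty is conventionally counted as IoU = 1 (assumption).
    """
    ious = []
    for pred, true in zip(np.asarray(pred_masks, bool), np.asarray(true_masks, bool)):
        union = np.logical_or(pred, true).sum()
        if union == 0:
            ious.append(1.0)  # nothing predicted, nothing annotated
            continue
        inter = np.logical_and(pred, true).sum()
        ious.append(inter / union)
    return float(np.mean(ious))
```

An mIoU of 0.55 thus means that, averaged over test frames, a little over half of the union of predicted and annotated septum pixels overlaps, which on a thin, fast-moving structure is substantially better than the U-net baseline of 0.15.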