Purpose: To delineate image data curation needs and describe a locally designed graphical user interface (GUI) to aid radiologists in image annotation for artificial intelligence (AI) applications in medical imaging.
Materials and Methods: GUI components support image analysis toolboxes, picture archiving and communication system integration, third-party applications, scripting-language processing, and integration of deep learning libraries. For clinical AI applications, GUI components included two-dimensional segmentation and classification; three-dimensional segmentation and quantification; and three-dimensional segmentation, quantification, and classification. To assess radiologist engagement and performance efficiency associated with GUI-related capabilities, image annotation rate (studies per day) and speed (minutes per case) were evaluated in two clinical scenarios of varying complexity: hip fracture detection and coronary atherosclerotic plaque demarcation and stenosis grading.
Results: For hip fracture, 1050 radiographs were annotated over 7 days (150 studies per day; median speed: 10 seconds per study [interquartile range, 3-21 seconds per study]). A total of 294 coronary CT angiographic studies with 1843 arteries and branches were annotated for atherosclerotic plaque over 23 days (15.2 studies [80.1 vessels] per day; median speed: 6.08 minutes per study [interquartile range, 2.8-10.6 minutes per study] and 73 seconds per vessel [interquartile range, 20.9-155 seconds per vessel]).
Conclusion: GUI-component compatibility with common image analysis tools facilitates radiologist engagement in image data curation, including image annotation, supporting AI application development and evolution for medical imaging. When complemented by other GUI elements, this creates a continuous, integrated workflow that supports an agile deep neural network life cycle.
We propose a 4D convolutional neural network (CNN) for the segmentation of retrospective ECG-gated cardiac CT, a series of single-channel volumetric data over time. Because only a small subset of volumes in the temporal sequence is annotated, we define a sparse loss function on the available labels, allowing the network to leverage unlabeled images during training and to generate a fully segmented sequence. We investigate the accuracy of the proposed 4D network in predicting temporally consistent segmentations and compare it with traditional 3D segmentation approaches. We demonstrate the feasibility of the 4D CNN and establish its performance on 4D cardiac CT angiography.
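The sparse loss described above can be sketched as a masked cross-entropy that averages only over the annotated time frames, so unlabeled volumes contribute no gradient signal. The following is a minimal NumPy illustration, not the authors' implementation; the function name, array shapes, and annotation mask are assumptions for the sketch.

```python
import numpy as np

def sparse_voxel_loss(probs, labels, annotated):
    """Cross-entropy averaged only over annotated time frames.

    probs     : (T, C, X, Y, Z) softmax probabilities for T frames
    labels    : (T, X, Y, Z) integer class labels (ignored where unannotated)
    annotated : (T,) boolean mask, True for frames with expert labels

    Hypothetical sketch of a sparse temporal loss; unannotated
    frames are simply excluded from the average.
    """
    eps = 1e-8  # numerical floor to avoid log(0)
    frame_losses = []
    for t in np.nonzero(annotated)[0]:
        # gather the predicted probability of the true class at each voxel
        p_true = np.take_along_axis(probs[t], labels[t][None, ...], axis=0)[0]
        frame_losses.append(-np.log(p_true + eps).mean())
    return float(np.mean(frame_losses))
```

In a full training loop, the 4D network would still receive the entire temporal sequence as input, so the convolutions propagate information from unlabeled frames even though the loss is computed only where labels exist.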