Statistical shape modeling is an important tool for characterizing variation in anatomical morphology. Shapes of interest are typically measured with 3D imaging and then passed through a pipeline of registration, segmentation, and extraction of shape features or projection onto a lower-dimensional shape space, which facilitates subsequent statistical analysis. Many methods for constructing compact shape representations have been proposed, but they are often impractical because the required image preprocessing involves significant parameter tuning, manual delineation, and/or quality control by the user. We propose DeepSSM, a deep learning approach that extracts a low-dimensional shape representation directly from 3D images with virtually no parameter tuning or user assistance. DeepSSM uses a convolutional neural network (CNN) that simultaneously localizes the biological structure of interest, establishes correspondences, and projects these points onto a low-dimensional shape representation in the form of PCA loadings within a point distribution model. To overcome the limited availability of training images with dense correspondences, we present a novel data augmentation procedure that uses the shape statistics of existing correspondences on a relatively small set of processed images to create plausible training samples with known shape parameters. In this way, we leverage a limited number of CT/MRI scans (40-50) into the thousands of images needed to train a deep neural network. After training, the CNN automatically produces accurate low-dimensional shape representations for unseen images. We validate DeepSSM on three applications: pediatric cranial CT for characterizing metopic craniosynostosis, femur CT for identifying morphologic deformities of the hip due to femoroacetabular impingement, and left atrium MRI for predicting atrial fibrillation recurrence.
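The abstract describes two ingredients: a PCA-based data augmentation step built from a small set of correspondence models, and a CNN that regresses PCA loadings directly from raw image volumes. Below is a minimal, hypothetical PyTorch sketch of both; the layer sizes, jitter scheme, and function names are assumptions for illustration, not the published DeepSSM implementation.

```python
# Illustrative sketch only; shapes, architecture, and augmentation scheme are assumptions.
import numpy as np
import torch
import torch.nn as nn

def augment_pca_samples(loadings, n_new, jitter=0.5, rng=None):
    """Sample plausible new PCA loadings by perturbing loadings from the
    small set of processed images with correspondences. In a full pipeline,
    these synthetic loadings would be mapped back to correspondence points
    and paired with correspondingly deformed images."""
    rng = rng or np.random.default_rng(0)
    idx = rng.integers(0, len(loadings), size=n_new)
    noise = rng.normal(scale=jitter * loadings.std(axis=0),
                       size=(n_new, loadings.shape[1]))
    return loadings[idx] + noise

class LoadingRegressor(nn.Module):
    """3D CNN mapping a raw volume to PCA loadings (a guess at the general
    architecture class, not the published network)."""
    def __init__(self, n_modes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(4),
        )
        self.head = nn.Linear(16 * 4 * 4 * 4, n_modes)

    def forward(self, x):                      # x: (batch, 1, D, H, W)
        return self.head(self.features(x).flatten(1))

# Usage: expand 40 real shapes into 2000 synthetic loadings, then regress.
real_loadings = np.random.default_rng(0).normal(size=(40, 10))
synthetic = augment_pca_samples(real_loadings, n_new=2000)
model = LoadingRegressor(n_modes=10)
pred = model(torch.randn(2, 1, 64, 64, 64))    # -> (2, 10) predicted PCA loadings
```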
The standard for diagnosing metopic craniosynostosis (CS) relies on computed tomography (CT) imaging and physical examination, but there is no standardized method for determining disease severity. Previous studies using interfrontal angles have evaluated differences at specific skull landmarks; however, these measurements are difficult to obtain in clinical practice and fail to assess the complete skull contour. This pilot project employs machine learning to combine statistical shape information with expert ratings and thereby generate a novel objective measure of metopic CS severity. Expert ratings of normal and metopic skull CT images were collected. Skull shape analysis was conducted using the ShapeWorks software. Machine learning was then used to combine the expert ratings with our shape analysis model and predict the severity of metopic CS from CT images. Our model was compared against the current gold standard based on interfrontal angles. Seventeen metopic skull CT images of patients 5 to 15 months old were assigned a severity by 18 craniofacial surgeons, and 65 unaffected controls were included with a severity of 0. Our model accurately correlated the level of skull deformity with severity (P < 0.10) and predicted the severity of metopic CS more often than models using interfrontal angles (χ² = 5.46, P = 0.019). This is the first study to combine shape information with expert ratings to generate an objective measure of severity for metopic CS. The method may help clinicians easily quantify severity and perform robust longitudinal assessments of the condition.
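As a rough illustration of the modeling step described above (relating per-skull shape scores to expert severity ratings), the following is a hypothetical Python sketch; the ridge-regression choice, the feature matrix, and the ratings are placeholders, not the study's actual model or data.

```python
# Illustrative sketch only; features and targets below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_skulls, n_modes = 82, 5                              # 17 metopic + 65 controls in the study
shape_scores = rng.normal(size=(n_skulls, n_modes))    # placeholder shape-space coordinates
severity = rng.uniform(0, 10, size=n_skulls)           # placeholder mean expert rating (0 = control)

model = Ridge(alpha=1.0)                               # assumed regressor, not the study's choice
cv_r2 = cross_val_score(model, shape_scores, severity, cv=5, scoring="r2")
model.fit(shape_scores, severity)
print("cross-validated R^2:", cv_r2.mean())
```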
Left atrium shape has been shown to be an independent predictor of recurrence after atrial fibrillation (AF) ablation. Shape-based representation is central to such an estimation process, and correspondence-based representation offers the most flexibility and ease of computation for population-level shape statistics. Nonetheless, population-level shape representations in the form of image segmentations and correspondence models derived from cardiac MRI require significant human resources with anatomy-specific expertise. In this paper, we propose a machine learning approach that uses deep networks to estimate AF recurrence by predicting shape descriptors directly from MRI images, with no image preprocessing involved. We also propose a novel data augmentation scheme to effectively train a deep network in a limited-training-data setting. We compare this new method of estimating shape descriptors from images with state-of-the-art correspondence-based shape modeling, which requires image segmentation and correspondence optimization. Results show that the proposed method and the current state of the art produce statistically similar AF recurrence estimates, eliminating the need for expensive preprocessing pipelines and the associated human labor.
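To illustrate the downstream use of the predicted shape descriptors, the following hypothetical Python sketch fits a simple classifier on network-predicted PCA loadings to estimate AF recurrence; the classifier choice and the synthetic data are assumptions, not the paper's actual pipeline.

```python
# Illustrative sketch only; loadings and outcomes below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_patients, n_modes = 100, 10
loadings = rng.normal(size=(n_patients, n_modes))   # stand-in for network-predicted shape descriptors
recurrence = rng.integers(0, 2, size=n_patients)    # 1 = AF recurred after ablation

clf = LogisticRegression(max_iter=1000)             # assumed classifier, not the paper's choice
auc = cross_val_score(clf, loadings, recurrence, cv=5, scoring="roc_auc")
print("cross-validated AUC:", auc.mean())
```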