Deformable shape models (DSMs) comprise a general approach that shows great promise for automatic image segmentation. Published studies by others and our own research results strongly suggest that segmentation of a normal or near-normal object from 3D medical images will be most successful when the DSM approach uses (1) knowledge of the geometry of not only the target anatomic object but also the ensemble of objects providing context for the target object and (2) knowledge of the image intensities to be expected relative to the geometry of the target and contextual objects. The segmentation will be most efficient when the deformation operates at multiple object-related scales and uses deformations that include not just local translations but also the biologically important transformations of bending and twisting (i.e., local rotation) and local magnification.

In computer vision an important class of DSM methods uses explicit geometric models in a Bayesian statistical framework to provide a priori information used in posterior optimization to match the DSM against a target image. In this approach a DSM of the object to be segmented is placed in the target image data and undergoes a series of rigid and nonrigid transformations that deform the model to closely match the target object. The deformation process is driven by optimizing an objective function with terms for the geometric typicality of each instance of the deformed model and for its model-to-image match. The success of this approach depends strongly on the object representation, i.e., the structural details and parameter set of the DSM, which in turn determine the analytic form of the objective function. This paper describes a form of DSM called m-reps that provides or enables these properties, and a method of segmentation consisting of large-to-small-scale posterior optimization of m-reps.
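The posterior optimization described above can be illustrated with a minimal sketch. This is not the m-reps implementation; the Gaussian shape prior, the toy image-match term, and the coarse-to-fine random search below are all assumptions chosen only to show the shape of the objective (geometric typicality plus model-to-image match) and of a multi-scale optimization loop:

```python
import numpy as np

def log_posterior(shape_params, mean_shape, inv_cov, image_match, weight=1.0):
    # Geometric typicality: log of a Gaussian prior over shape parameters
    # (a stand-in for the statistical shape model of the trained DSM).
    diff = shape_params - mean_shape
    typicality = -0.5 * diff @ inv_cov @ diff
    # Model-to-image match: higher when the deformed model fits the image.
    return typicality + weight * image_match(shape_params)

# Toy image-match term, peaked at a hypothetical "true" configuration.
true_params = np.array([0.2, -0.1, 0.4])
match = lambda p: -np.sum((p - true_params) ** 2)

mean_shape = np.zeros(3)   # prior mean of the shape parameters
inv_cov = np.eye(3)        # inverse covariance of the shape prior

# Coarse-to-fine hill climbing: the proposal scale shrinks each stage,
# mimicking optimization that proceeds from large to small scale.
rng = np.random.default_rng(0)
best = mean_shape.copy()
for scale in (0.5, 0.1, 0.02):
    for _ in range(200):
        cand = best + rng.normal(0.0, scale, size=3)
        if (log_posterior(cand, mean_shape, inv_cov, match)
                > log_posterior(best, mean_shape, inv_cov, match)):
            best = cand
```

The optimum of this particular objective is a compromise between the prior mean and the image evidence; with these weights it sits at `2 * true_params / 3`, illustrating how the typicality term regularizes the deformation toward statistically plausible shapes.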
Segmentation by deformable m-reps, together with the appropriate data representations, visualizations, and user interface, has been implemented in software that accomplishes 3D segmentations in a few minutes. Software for building and training models has also been developed. The methods underlying this software and its abilities are the subject of this paper.
Abstract. Automated medical image segmentation is a challenging task that benefits from the use of effective image appearance models. In this paper, we compare appearance models at three regional scales for statistically characterizing image intensity near object boundaries in the context of segmentation via deformable models. The three models capture appearance in the form of regional intensity quantile functions. These distribution-based regional image descriptors are amenable to Euclidean methods such as principal component analysis, which we use to build the statistical appearance models. The first model uses two regions, the interior and exterior of the organ of interest. The second model accounts for exterior inhomogeneity by clustering on object-relative local intensity quantile functions to determine tissue-consistent regions relative to the organ boundary. The third model analyzes these image descriptors per geometrically defined local region. To evaluate the three models, we present segmentation results on bladders and prostates in CT in the context of day-to-day adaptive radiotherapy for the treatment of prostate cancer. Results show improved segmentations with more local regions, probably because smaller regions better represent local inhomogeneity in the intensity distribution near the organ boundary.
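The key idea above is that a regional intensity distribution, summarized as a quantile function sampled at fixed quantiles, becomes a fixed-length Euclidean vector on which PCA can be run directly. The sketch below is an illustrative assumption, not the authors' code: the region samples, the number of quantiles, and the synthetic training data are all made up for demonstration:

```python
import numpy as np

def regional_quantile_function(intensities, n_quantiles=16):
    # Distribution-based descriptor: intensity values at evenly spaced
    # quantiles. Fixed length, so descriptors live in a Euclidean space.
    qs = np.linspace(0.0, 1.0, n_quantiles)
    return np.quantile(intensities, qs)

def pca_appearance_model(descriptors, n_modes=2):
    # descriptors: one quantile function per training image, stacked row-wise.
    X = np.asarray(descriptors)
    mean = X.mean(axis=0)
    # Principal modes of variation via SVD of the centered data matrix.
    _, s, vt = np.linalg.svd(X - mean, full_matrices=False)
    variances = (s[:n_modes] ** 2) / (len(X) - 1)
    return mean, vt[:n_modes], variances

# Synthetic stand-in for intensity samples from one region (e.g. the organ
# interior) across 8 training images, with varying mean intensity.
rng = np.random.default_rng(1)
samples = [rng.normal(100 + 5 * i, 10, size=500) for i in range(8)]
descriptors = [regional_quantile_function(s) for s in samples]
mean_qf, modes, variances = pca_appearance_model(descriptors)
```

In a two-region model this would be done once for the interior and once for the exterior; the finer models in the paper instead compute such descriptors per clustered or per geometrically defined local region near the boundary.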