The detection and delineation of the liver in abdominal 3D computed tomography (CT) images are fundamental tasks in computer-assisted liver surgery planning. However, automatic and accurate segmentation, and liver detection in particular, remains challenging due to complex backgrounds, ambiguous boundaries, heterogeneous appearances, and the highly varied shapes of the liver. To address these difficulties, we propose an automatic segmentation framework based on a 3D convolutional neural network (CNN) and globally optimized surface evolution. First, a deep 3D CNN is trained to learn a subject-specific probability map of the liver, which provides the initial surface and acts as a shape prior in the subsequent segmentation step. Then, both global and local appearance information from the prior segmentation is adaptively incorporated into a segmentation model that is globally optimized via surface evolution. The proposed method has been validated on 42 CT images from the public Sliver07 database and from local hospitals. On the Sliver07 online testing set, the proposed method achieves an overall score of [Formula: see text], a mean Dice similarity coefficient of [Formula: see text], and an average symmetric surface distance of [Formula: see text] mm. The quantitative validations and comparisons show that the proposed method is accurate and effective for clinical application.
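As a rough illustration of the first stage only, the following is a minimal sketch, assuming PyTorch, of a small 3D fully convolutional network that maps a CT sub-volume to a voxel-wise liver probability map. The layer widths and depth are illustrative and are not the authors' architecture; thresholding the map stands in for extracting the initial surface that the globally optimized surface evolution would then refine.

```python
# Minimal sketch (not the authors' implementation): a small 3D fully
# convolutional network producing a voxel-wise liver probability map.
# Assumes PyTorch; layer sizes are illustrative only.
import torch
import torch.nn as nn


class Liver3DCNN(nn.Module):
    def __init__(self, in_channels=1, base_filters=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, base_filters, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(base_filters, base_filters * 2, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(base_filters * 2, base_filters * 2, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # 1x1x1 convolution yields one logit per voxel
        self.classifier = nn.Conv3d(base_filters * 2, 1, kernel_size=1)

    def forward(self, x):
        # x: (batch, 1, depth, height, width) CT sub-volume
        logits = self.classifier(self.features(x))
        return torch.sigmoid(logits)  # voxel-wise liver probability map


if __name__ == "__main__":
    model = Liver3DCNN()
    ct_patch = torch.randn(1, 1, 32, 64, 64)   # toy CT sub-volume
    prob_map = model(ct_patch)                  # same spatial size as input
    init_mask = prob_map > 0.5                  # crude stand-in for the initial surface
    print(prob_map.shape, init_mask.sum().item())
```

In the full framework described above, this probability map would also serve as the shape prior that the appearance-based segmentation model incorporates during surface evolution.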
A fully automatic method for multi-organ segmentation from abdominal CT images was developed and evaluated. The results demonstrated its potential for clinical use, with high effectiveness, robustness, and efficiency.
The proposed model can efficiently and reproducibly delineate ambiguous liver edges from complex tissue backgrounds. Quantitative validations and comparative results demonstrate the accuracy and efficacy of the model.
Multimodal data fusion has shown great advantages in uncovering information that could be overlooked by a single modality. In this paper, we consider the integration of high-dimensional multimodality imaging and genetic data for Alzheimer's disease (AD) diagnosis. With a focus on exploiting both phenotype and genotype information, we design a novel multiple kernel learning method regularized by a structured sparsity penalty defined by the ℓ1,p-norm (p > 1). Specifically, to facilitate structured feature selection and fusion from heterogeneous modalities and to capture feature-wise importance, we represent each feature with a distinct basis kernel and then group the kernels according to modality. An optimally combined kernel representation of the multimodal features is then learned in a data-driven manner. In contrast to the Group Lasso (i.e., the ℓ2,1-norm penalty), which performs sparse group selection, the proposed regularizer on the kernel weights sparsely selects a concise feature set within each homogeneous group and fuses the heterogeneous feature groups through a dense norm. We have evaluated our method using data from subjects in the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The effectiveness of the method is demonstrated by the clearly improved diagnostic prediction as well as by the discovered brain regions and SNPs relevant to AD.
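To make the contrast concrete, one common way to write the two penalties on a kernel weight vector β partitioned into modality groups β1, …, βG is shown below; this is a notational sketch consistent with the description above, not necessarily the paper's exact formulation.

```latex
\[
\underbrace{\|\beta\|_{2,1} = \sum_{g=1}^{G} \|\beta_g\|_2}_{\text{Group Lasso: sparse across groups}}
\qquad \text{vs.} \qquad
\underbrace{\|\beta\|_{1,p} = \Bigl(\sum_{g=1}^{G} \|\beta_g\|_1^{\,p}\Bigr)^{1/p},\; p > 1}_{\text{proposed: sparse within groups, dense across groups}}
\]
```

Under this reading, the inner ℓ1 norm drives feature-wise sparsity within each modality group, while the outer ℓp norm with p > 1 is non-sparse and therefore keeps all modality groups active during fusion, matching the behavior described above.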