The liver is a common site for the development of primary tumors (i.e., originating in the liver, e.g., hepatocellular carcinoma) and secondary tumors (i.e., spread to the liver, e.g., colorectal cancer metastases). Due to the complex background of the liver and the heterogeneous, diffuse shape of tumors, automatic tumor segmentation remains a challenging task; so far, only interactive methods have produced acceptable segmentation results for liver tumors. In this paper, we design an Attention Hybrid Connection Network architecture that combines soft and hard attention mechanisms with long and short skip connections. We also propose a cascade network consisting of a liver localization network, a liver segmentation network, and a tumor segmentation network to cope with this challenge. In addition, a joint Dice loss function is proposed to train the liver localization network to obtain an accurate 3D liver bounding box, and focal binary cross-entropy is used as the loss function to fine-tune the tumor segmentation network, detecting more potentially malignant tumors while reducing false positives. Our framework is trained on the 110 cases of the LiTS dataset and extensively evaluated on the 20 cases of the 3DIRCADb dataset and the 117 cases of a clinical dataset. The results indicate that the proposed method achieves faster network convergence and accurate semantic segmentation, and further demonstrate that it has good clinical value.

INDEX TERMS: Liver tumor segmentation, deep convolutional neural network, feature fusion, attention mechanism.
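The abstract names a joint Dice loss and a focal binary cross-entropy but gives neither formula. Below is a minimal NumPy sketch of the standard soft Dice loss and focal BCE as commonly defined in the segmentation literature; the paper's exact "joint" formulation and hyperparameters (e.g., the focusing parameter `gamma`) are assumptions here, not taken from the source.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 - 2|P∩T| / (|P| + |T|).
    pred and target are arrays of per-voxel foreground probabilities/labels."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def focal_bce(pred, target, gamma=2.0, eps=1e-6):
    """Focal binary cross-entropy: the (1 - p_t)^gamma factor down-weights
    easy, confidently-classified voxels so that hard voxels (e.g., small
    tumors) dominate the gradient. gamma=2 is a common default, assumed here."""
    pred = np.clip(pred, eps, 1.0 - eps)
    pt = np.where(target == 1, pred, 1.0 - pred)  # prob of the true class
    return float(np.mean(-((1.0 - pt) ** gamma) * np.log(pt)))
```

For a perfect prediction both losses approach zero; as confidence on correct voxels drops, the focal term grows, which is what lets fine-tuning with focal BCE recover hard tumor voxels without being swamped by easy background.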
Precise delineation of target tumors from positron emission tomography-computed tomography (PET-CT) is a key step in clinical practice and radiation therapy. PET-CT co-segmentation exploits the complementary information of the two modalities to reduce the uncertainty of single-modality segmentation and thereby obtain more accurate results. Current PET-CT segmentation methods based on fully convolutional networks (FCNs) mainly adopt image fusion or feature fusion. However, existing fusion strategies do not account for the uncertainty of multi-modal segmentation, and complex feature fusion consumes substantial computing resources, especially when processing 3D volumes. In this work, we analyze PET-CT co-segmentation from the perspective of uncertainty and propose the evidence fusion network (EFNet). Trained with the proposed evidence loss, the network outputs a PET result and a CT result that each carry uncertainty; these serve as PET evidence and CT evidence. Evidence fusion then reduces the uncertainty of the single-modality evidence, and the final segmentation is obtained from the fused PET and CT evidence. EFNet uses a basic 3D U-Net as its backbone and only simple unidirectional feature fusion. Moreover, EFNet can train and predict PET evidence and CT evidence separately, without requiring parallel training of two branch networks. We conduct experiments on the soft-tissue sarcoma and lymphoma datasets. Compared with 3D U-Net, our method improves Dice by 8% and 5%, respectively; compared with a complex feature fusion method, it improves Dice by 7% and 2%, respectively. Our results show that, in FCN-based PET-CT segmentation, outputting uncertainty-aware evidence and fusing it both simplifies the network and improves segmentation quality.
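The abstract does not specify the evidence representation or the fusion rule. One common choice in evidential segmentation, sketched here purely as an assumption, is to represent each modality's per-voxel output as a subjective-logic opinion (belief in foreground, belief in background, uncertainty mass, summing to 1) and to combine the two opinions with Dempster's rule, which shrinks uncertainty when the modalities agree:

```python
def fuse_evidence(pet, ct):
    """Dempster's combination of two binary-class opinions.
    Each opinion is a tuple (belief_fg, belief_bg, uncertainty) summing to 1.
    The representation and rule are illustrative assumptions, not the
    paper's stated formulation."""
    b1f, b1b, u1 = pet
    b2f, b2b, u2 = ct
    # Conflict: mass assigned to contradictory classes by the two sources.
    conflict = b1f * b2b + b1b * b2f
    k = 1.0 - conflict  # normalization after discarding conflicting mass
    bf = (b1f * b2f + b1f * u2 + u1 * b2f) / k
    bb = (b1b * b2b + b1b * u2 + u1 * b2b) / k
    u = (u1 * u2) / k  # fused uncertainty: only joint ignorance survives
    return bf, bb, u
```

For example, fusing a moderately confident PET opinion with an agreeing CT opinion yields a fused opinion whose uncertainty is strictly below either input's, which is the behavior the abstract attributes to evidence fusion.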