Breast tumor segmentation plays a crucial role in subsequent disease diagnosis, and most algorithms require an interactive prior to first locate tumors and then perform segmentation based on tumor-centric candidates. In this paper, we propose a fully convolutional network that achieves automatic segmentation of breast tumors in an end-to-end manner. Considering the diversity in shape and size of malignant tumors in digital mammograms, we introduce multiscale image information into the fully convolutional dense network architecture to improve segmentation precision. Atrous convolutions with multiple sampling rates are concatenated to capture different fields of view of image features without adding parameters, thereby avoiding overfitting. A weighted loss function is also employed during training, with weights set according to the proportion of tumor pixels in the entire image, in order to mitigate the class imbalance problem. Qualitative and quantitative comparisons demonstrate that the proposed algorithm achieves automatic tumor segmentation with high precision for tumors of various sizes and shapes, without preprocessing or postprocessing.
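The multi-rate atrous (dilated) convolution idea above can be sketched in one dimension: the same kernel is applied at several sampling rates and the responses are concatenated, enlarging the field of view without adding parameters. This is an illustrative numpy sketch of the general technique, not the paper's implementation; the function name `atrous_conv1d`, the kernel, and the rates (1, 2, 4) are assumptions.

```python
import numpy as np

def atrous_conv1d(x, kernel, rate):
    """1-D atrous (dilated) convolution: kernel taps are spaced
    `rate` samples apart, so the receptive field grows with the
    rate while the parameter count stays fixed. 'Valid' padding
    is used for simplicity (illustrative sketch only)."""
    k = len(kernel)
    span = (k - 1) * rate + 1            # effective receptive field
    out_len = len(x) - span + 1
    return np.array([
        sum(kernel[j] * x[i + j * rate] for j in range(k))
        for i in range(out_len)
    ])

# Multi-rate branch: the same 3-tap kernel at rates 1, 2 and 4,
# with the responses cropped to a common length and concatenated,
# mirroring the multiscale concatenation described in the abstract.
x = np.arange(16, dtype=float)
kernel = np.array([1.0, 0.0, -1.0])      # simple edge-like filter
branches = [atrous_conv1d(x, kernel, r) for r in (1, 2, 4)]
common = min(len(b) for b in branches)
multiscale = np.concatenate([b[:common] for b in branches])
```

On a linear ramp this filter responds with `-2 * rate` at every position, which makes the growing field of view of each branch directly visible in the concatenated output.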
Lung cancer ranks among the most common types of cancer. Noninvasive computer-aided diagnosis can enable large-scale rapid screening of patients with suspected lung cancer. Deep learning methods have already been applied to the automatic diagnosis of lung cancer. However, owing to the restrictions of single-modality datasets and the lack of approaches that reliably extract fine-grained features from different imaging modalities, the automated diagnosis of lung cancer from noninvasive clinical images requires further study. In this paper, we present a deep learning architecture that combines fine-grained features from PET and CT images for the noninvasive diagnosis of lung cancer. A multidimensional attention mechanism (spanning the channel and spatial dimensions) is used to effectively suppress feature noise when extracting fine-grained features from each imaging modality. We conduct a comparative analysis of the two aspects of feature fusion and the attention mechanism through quantitative evaluation metrics and visualization of the deep learning process. In our experiments, we obtained an area under the ROC curve of 0.92 (balanced accuracy = 0.72) and more focused network attention, which indicates effective extraction of fine-grained features from each imaging modality.
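The channel-then-spatial data flow of a multidimensional attention block can be sketched as follows. This is a parameter-free illustrative version, assuming channel weights from global average pooling and spatial weights from the channel-wise mean; real modules of this kind (e.g. CBAM-style blocks) learn these weights, and the function name `multidim_attention` is an assumption, not the paper's API.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def multidim_attention(feat):
    """Illustrative channel-then-spatial attention on a (C, H, W) map.

    Channel attention reweights whole feature channels; spatial
    attention then reweights individual locations. Both gates lie in
    (0, 1), so noisy responses are attenuated rather than amplified.
    Parameter-free sketch: only the data flow matches the idea in the
    abstract, not any learned implementation.
    """
    # Channel attention: one weight per channel from global pooling.
    ch = sigmoid(feat.mean(axis=(1, 2)))          # shape (C,)
    feat = feat * ch[:, None, None]
    # Spatial attention: one weight per location from the channel mean.
    sp = sigmoid(feat.mean(axis=0))               # shape (H, W)
    return feat * sp[None, :, :]

feat = np.random.default_rng(0).normal(size=(4, 8, 8))
out = multidim_attention(feat)
```

In a two-branch network, one such block per modality (PET and CT) would produce denoised feature maps that are then fused for classification.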