Automatically assessing the location and extent of the liver and liver tumors is critical for radiologists, diagnosis, and the clinical process. In recent years, a large number of U-Net variants based on multi-scale feature fusion have been proposed to improve segmentation performance on medical images. Unlike previous works, which extract the context information of medical images by applying multi-scale feature fusion, we propose a novel network named Multi-scale Attention Net (MA-Net) that introduces a self-attention mechanism to adaptively integrate local features with their global dependencies. MA-Net can capture rich contextual dependencies through this attention mechanism. We design two blocks: a Position-wise Attention Block (PAB) and a Multi-scale Fusion Attention Block (MFAB). The PAB models the feature interdependencies in the spatial dimension, capturing the dependencies between pixels in a global view. The MFAB captures the channel dependencies between feature maps through multi-scale semantic feature fusion. We evaluate our method on the dataset of the MICCAI 2017 LiTS Challenge, where it achieves better performance than other state-of-the-art methods: the Dice values for liver and tumor segmentation are 0.960±0.03 and 0.749±0.08, respectively.
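To make the mechanism concrete, the following is a minimal PyTorch sketch of a position-wise (spatial) self-attention block of the kind described above: 1x1 convolutions produce query, key, and value projections, a pixel-to-pixel affinity matrix is computed with a softmax, and the attended features are added back through a learnable residual weight. The class name, channel-reduction ratio, and layer choices are illustrative assumptions, not the authors' exact MA-Net code.

import torch
import torch.nn as nn

class PositionAttention(nn.Module):
    """Generic spatial self-attention block (illustrative, not the official PAB)."""
    def __init__(self, in_channels: int, reduction: int = 8):
        super().__init__()
        # 1x1 convolutions project the feature map into query/key/value spaces
        self.query = nn.Conv2d(in_channels, in_channels // reduction, kernel_size=1)
        self.key = nn.Conv2d(in_channels, in_channels // reduction, kernel_size=1)
        self.value = nn.Conv2d(in_channels, in_channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        n = h * w
        q = self.query(x).view(b, -1, n).permute(0, 2, 1)   # B x N x C'
        k = self.key(x).view(b, -1, n)                      # B x C' x N
        attn = torch.softmax(torch.bmm(q, k), dim=-1)       # B x N x N pixel affinities
        v = self.value(x).view(b, -1, n)                    # B x C x N
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, h, w)
        return self.gamma * out + x                         # residual connection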
The upgraded version of intelligent image-activated cell sorting (iIACS) has enabled higher-throughput and more sensitive image-based sorting of single live cells from heterogeneous populations.
Diabetic retinopathy (DR) is a common chronic fundus disease characterized by four kinds of microvascular lesions: microaneurysms (MAs), hemorrhages (HEs), hard exudates, and soft exudates. Accurately detecting and counting these lesions is a basic but important task, and their manual annotation is labor-intensive in clinical analysis. To address this problem, we propose a novel segmentation method for the different lesions in DR. Our method is based on a convolutional neural network and consists of an encoder module, an attention module, and a decoder module, so we refer to it as EAD-Net. After normalization and augmentation, fundus images are fed to EAD-Net for automated feature extraction and pixel-wise label prediction. Using evaluation metrics based on the matching degree between detected candidates and ground-truth lesions, our method achieved a sensitivity of 92.77%, a specificity of 99.98%, and an accuracy of 99.97% on the e_ophtha_EX dataset, with comparable AUPR (area under the precision-recall curve) scores on the IDRiD dataset. Moreover, results on a local dataset show that EAD-Net outperforms the original U-Net on most metrics, especially sensitivity and F1-score, with improvements of nearly ten percent. The proposed EAD-Net is a novel method grounded in clinical DR diagnosis and yields satisfactory segmentations of the four kinds of lesions, which have important clinical significance for the monitoring and diagnosis of DR.
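For reference, here is a minimal NumPy sketch of the pixel-wise metrics quoted above (sensitivity, specificity, accuracy), computed from binary prediction and ground-truth masks; the function and array names are illustrative, not taken from the paper's code.

import numpy as np

def pixel_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Compute pixel-wise sensitivity, specificity, and accuracy for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()    # lesion pixels correctly detected
    tn = np.logical_and(~pred, ~truth).sum()  # background correctly rejected
    fp = np.logical_and(pred, ~truth).sum()   # background flagged as lesion
    fn = np.logical_and(~pred, truth).sum()   # lesion pixels missed
    return {
        "sensitivity": tp / (tp + fn),        # recall on lesion pixels
        "specificity": tn / (tn + fp),        # recall on background pixels
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }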