In this paper, a multiscale convolutional network (MSCN) and graph-partitioning-based method is proposed for accurate segmentation of cervical cytoplasm and nuclei. Specifically, deep learning via the MSCN is explored to extract scale-invariant features and to segment regions centered at each pixel. The coarse segmentation is then refined by an automated graph-partitioning method based on the pretrained features. The texture, shape, and contextual information of the target objects are learned to localize distinctive boundary appearance, which is also exploited to generate markers for splitting touching nuclei. For further refinement, a coarse-to-fine nucleus segmentation framework is developed, and the computational complexity of the segmentation is reduced by operating on superpixels instead of raw pixels. Extensive experimental results demonstrate that the proposed cervical cell segmentation method delivers promising results and outperforms existing methods.
Accurate segmentation of cervical cells in Pap smear images is an important step in automatic pre-cancer identification in the uterine cervix. One of the major segmentation challenges is overlapping of cytoplasm, which has not been well addressed in previous studies. To tackle the overlapping issue, this paper proposes a learning-based method with robust shape priors to segment individual cells in Pap smear images, supporting automatic monitoring of changes in cells, which is a vital prerequisite of early detection of cervical cancer. We define this splitting problem as a discrete labeling task over multiple cells with a suitable cost function. The labeling results are then fed into our dynamic multi-template deformation model for further boundary refinement. Multi-scale deep convolutional networks are adopted to learn the diverse cell appearance features. We also incorporate high-level shape information to guide segmentation where cell boundaries may be weak or lost due to cell overlapping. An evaluation carried out on two different datasets demonstrates the superiority of our proposed method over state-of-the-art methods in terms of segmentation accuracy.
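The flavor of the discrete labeling step can be illustrated with a toy cost function. The paper solves a joint optimization over all pixels; the sketch below instead minimizes a per-pixel cost independently, and its two terms (an intensity data term and a distance-to-center shape prior, weighted by `alpha`) are illustrative assumptions, not the authors' actual energy.

```python
def label_overlap(pixels, centers, intensities, means, alpha=0.5):
    """Assign each pixel in an overlap region to one of several candidate cells.

    cost(p, k) = alpha * (I(p) - mean_k)**2            # data term
               + (1 - alpha) * dist(p, center_k)**2    # crude shape prior
    Returns one label index per pixel.
    """
    labels = []
    for (r, c), inten in zip(pixels, intensities):
        best_k, best_cost = None, float("inf")
        for k, ((cr, cc), mu) in enumerate(zip(centers, means)):
            data = (inten - mu) ** 2
            shape = (r - cr) ** 2 + (c - cc) ** 2
            cost = alpha * data + (1 - alpha) * shape
            if cost < best_cost:
                best_k, best_cost = k, cost
        labels.append(best_k)
    return labels
```

A real formulation would add pairwise smoothness terms and a learned shape model, which is what makes the labeling a genuine discrete optimization rather than a per-pixel decision.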
In this paper, we present a novel framework for dermoscopy image recognition via both a deep learning method and a local descriptor encoding strategy. Specifically, the deep representations of a rescaled dermoscopy image are first extracted via a very deep residual neural network (ResNet) pre-trained on a large natural image dataset. These local deep descriptors are then aggregated into orderless visual statistics based on Fisher vector (FV) encoding to build a global image representation. Finally, the FV-encoded representations are used to classify melanoma images with a support vector machine (SVM) using a chi-squared kernel. Our proposed method is capable of generating more discriminative features to deal with large variations within the melanoma class as well as small variations between melanoma and non-melanoma classes with limited training data. Extensive experiments are performed to demonstrate the effectiveness of our proposed method. Comparisons with state-of-the-art methods on the publicly available ISBI 2016 skin lesion challenge dataset show the superiority of our method.
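The final classification stage relies on a chi-squared kernel, which compares histogram-like feature vectors. A minimal sketch of the exponential chi-squared kernel is shown below; the paper does not specify which chi-squared variant or bandwidth it uses, so the exact form and the `gamma` parameter here are assumptions.

```python
import math

def chi2_kernel(x, y, gamma=1.0):
    """Exponential chi-squared kernel between two non-negative feature vectors:

        k(x, y) = exp(-gamma * sum_i (x_i - y_i)**2 / (x_i + y_i))

    Zero-sum bins are skipped to avoid division by zero.
    """
    s = 0.0
    for xi, yi in zip(x, y):
        if xi + yi > 0:
            s += (xi - yi) ** 2 / (xi + yi)
    return math.exp(-gamma * s)
```

In practice one would precompute this kernel on the FV-encoded representations and pass the Gram matrix to an SVM trained with a precomputed-kernel option.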
The quality of ultrasound (US) images in the obstetric examination is crucial for accurate biometric measurement. However, manual quality control is a labor-intensive process and often impractical in a clinical setting. To improve the efficiency of examination and alleviate the measurement error caused by improper US scanning operation and slice selection, a computerized fetal US image quality assessment (FUIQA) scheme is proposed to assist US image quality control in the clinical obstetric examination. The proposed FUIQA is realized with two deep convolutional neural network models, denoted L-CNN and C-CNN, respectively. The L-CNN finds the region of interest (ROI) of the fetal abdominal region in the US image. Based on the ROI found by the L-CNN, the C-CNN evaluates image quality by assessing how well the key structures, the stomach bubble and the umbilical vein, are depicted. To further boost the performance of the L-CNN, we augment the input sources of the neural network with local phase features alongside the original US data, and we show that these heterogeneous input sources improve the L-CNN's performance. The performance of the proposed FUIQA is compared with subjective image quality evaluations from three medical doctors. Comprehensive experiments illustrate that the computerized assessment produced by our FUIQA scheme is comparable to the subjective ratings of medical doctors.
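The two-stage flow, localize first, then score the crop, can be sketched as below. Both models are hypothetical stubs standing in for the trained L-CNN and C-CNN (the quadrant search and the mean-intensity score are placeholders invented for this example, not the paper's method); only the pipeline shape, ROI proposal followed by quality scoring of the crop, reflects the described scheme.

```python
def l_cnn_stub(image):
    """Hypothetical localizer: return (row, col, height, width) of the ROI.
    For illustration it simply picks the brightest quadrant."""
    h, w = len(image), len(image[0])
    best, roi = -1.0, (0, 0, h // 2, w // 2)
    for r0 in (0, h // 2):
        for c0 in (0, w // 2):
            s = sum(image[r][c]
                    for r in range(r0, r0 + h // 2)
                    for c in range(c0, c0 + w // 2))
            if s > best:
                best, roi = s, (r0, c0, h // 2, w // 2)
    return roi

def c_cnn_stub(crop):
    """Hypothetical quality scorer: mean intensity as a proxy score in [0, 1]."""
    vals = [v for row in crop for v in row]
    return sum(vals) / len(vals)

def assess_quality(image):
    """Two-stage assessment: crop the proposed ROI, then score the crop."""
    r0, c0, h, w = l_cnn_stub(image)
    crop = [row[c0:c0 + w] for row in image[r0:r0 + h]]
    return c_cnn_stub(crop)
```

The heterogeneous-input idea from the paper would enter at the localizer: the stub's single-channel `image` would become a stack of the raw US frame and its local phase map.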