This article presents an automatic method for identifying Mycobacterium tuberculosis in conventional microscopy images, based on the Red and Green color channels and global adaptive threshold segmentation. Unlike in fluorescence microscopy, in conventional microscopy the bacilli are not easily distinguished from the background. The key to the bacilli segmentation method employed in this work is the use of Red minus Green (R-G) images derived from the RGB color format: in the R-G image, the bacilli appear as white regions on a dark background. Some artifacts remain in the segmented (R-G) image; to remove them, we applied morphological, color, and size filters. The best sensitivity achieved was 76.65%. The main contribution of this work is the proposal of the first automatic method for identifying tuberculosis bacilli in conventional light microscopy.
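The R-G segmentation step described above can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' exact pipeline: the particular global threshold rule (mean plus one standard deviation of the R-G image) and the synthetic test image are assumptions made here for demonstration.

```python
import numpy as np

def segment_bacilli_rg(rgb, threshold=None):
    """Segment candidate bacilli via the R-G (Red minus Green) image.

    In the R-G image bacilli appear as bright regions on a dark
    background, so a single global threshold separates them. The
    mean+std threshold rule below is an illustrative assumption,
    not the authors' exact choice.
    """
    r = rgb[..., 0].astype(np.int16)
    g = rgb[..., 1].astype(np.int16)
    rg = np.clip(r - g, 0, 255).astype(np.uint8)  # R-G image
    if threshold is None:
        threshold = rg.mean() + rg.std()  # simple global adaptive threshold
    return rg > threshold  # boolean mask of candidate bacilli pixels

# Synthetic smear: gray background with one reddish bacillus-like blob.
img = np.full((10, 10, 3), 100, dtype=np.uint8)  # background, R-G = 0
img[4:6, 4:6, 0] = 220                           # reddish region, R-G = 120
mask = segment_bacilli_rg(img)
```

The morphological, color, and size filters mentioned in the abstract would then be applied to `mask` to discard artifacts.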
This article presents a systematic analysis of focus functions for conventional sputum smear microscopy for tuberculosis, the first step in the development of automatic microscopy. Nine autofocus functions are analyzed on a set of 1,200 images with varying degrees of content density, evaluated using quantitative procedures. The main finding of this work is that an autofocus function based on variance measures produced the best results for tuberculosis images.
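A variance-based focus measure of the kind the abstract refers to can be sketched as follows. The two variants below (plain and mean-normalized variance) are standard formulations from the autofocus literature, shown here as a hedged illustration; the abstract does not specify which variance measure performed best.

```python
import numpy as np

def focus_variance(img):
    """Variance focus measure: in-focus images have higher gray-level variance
    because fine structures (e.g. bacilli edges) produce stronger contrast."""
    img = np.asarray(img, dtype=np.float64)
    return img.var()

def focus_normalized_variance(img):
    """Variance divided by mean intensity, reducing sensitivity to
    overall illumination changes between fields of view."""
    img = np.asarray(img, dtype=np.float64)
    m = img.mean()
    return img.var() / m if m > 0 else 0.0

# Toy comparison: a high-contrast (sharp) pattern vs. a flat (defocused) field.
sharp = (np.indices((8, 8)).sum(axis=0) % 2) * 255.0  # checkerboard
flat = np.full((8, 8), 127.5)                         # uniform gray
```

In an autofocus loop, the microscope would acquire a through-focus stack and select the z-position that maximizes the chosen focus function.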
Background: Outlining lesion contours in ultrasound (US) breast images is an important step in breast cancer diagnosis. Malignant lesions infiltrate the surrounding tissue, generating irregular contours with spiculation and angulated margins, whereas benign lesions produce contours with a smooth outline and elliptical shape. In breast imaging, the majority of existing publications focus on using Convolutional Neural Networks (CNNs) for segmentation and classification of lesions in mammographic images. In this study, our main objective is to assess the ability of CNNs to detect contour irregularities of breast lesions in US images.

Methods: We compare the performance of two CNNs with a Directed Acyclic Graph (DAG) architecture and one CNN with a series architecture for breast lesion segmentation in US images. DAG and series architectures are both feedforward networks; the difference is that a DAG architecture can have more than one path between the first layer and the last layer, whereas a series architecture has only a single path from the first layer to the last. The CNN architectures were evaluated on two datasets.

Results: With the more complex DAG architecture, the following mean values were obtained for the metrics used to evaluate the segmented contours: global accuracy 0.956, IoU 0.876, F-measure 68.77%, and Dice coefficient 0.892.

Conclusion: The DAG CNN architecture gives the best values for the metrics used to quantitatively evaluate the segmented contours against the gold-standard contours. The contours segmented with this architecture also show more detail and irregularity, similar to the gold-standard contours.
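The series-versus-DAG distinction can be illustrated with a toy numpy sketch. The layer functions here are simple placeholders, not the CNNs evaluated in the study: the point is only that a series network composes layers along a single path, while a DAG merges a second path (e.g. a skip connection, as in U-Net- or ResNet-style segmentation networks) with the main one.

```python
import numpy as np

def layer_a(x):
    """Placeholder for a convolution + activation layer (ReLU-like)."""
    return np.maximum(x, 0)

def layer_b(x):
    """Placeholder for a second layer (simple scaling)."""
    return 0.5 * x

def series_forward(x):
    """Series architecture: exactly one path, layers applied in sequence."""
    return layer_b(layer_a(x))

def dag_forward(x):
    """DAG architecture: two paths from input to output, merged by addition.
    The second path bypasses both layers (a skip connection)."""
    return layer_b(layer_a(x)) + x

x = np.array([-1.0, 2.0])
y_series = series_forward(x)  # single-path output
y_dag = dag_forward(x)        # skip connection preserves input detail
```

Skip connections of this kind are one reason DAG segmentation networks tend to recover finer contour detail: low-level spatial information bypasses the deeper layers and is re-merged near the output.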
In recent years, convolutional neural networks (CNNs) have found many applications in medical image analysis. Given enough labeled data, CNNs can be trained to learn image features and used for object localization, classification, and segmentation. Although there is much interest in building and improving automated systems for medical image analysis, the lack of reliable, publicly available biomedical datasets makes this task difficult. In this work, the effectiveness of CNNs for the classification of breast lesions in ultrasound (US) images is studied. First, because of the limited amount of training data, we use a custom-built CNN with a few hidden layers and apply regularization techniques to improve performance. Second, we use transfer learning and adapt several pre-trained models to our dataset. The dataset used in this work consists of a limited number of cases, 641 in total, histopathologically categorized (413 benign and 228 malignant lesions). To assess how the results of our classifier generalize on our dataset, 5-fold cross-validation was employed, where in each fold 80% of the data were used for training and 20% for testing. Accuracy and the area under the ROC curve (AUC) were used as the main performance metrics. Before applying any regularization techniques, we achieved an overall accuracy of 85.98% for tumor classification and an AUC of 0.94. After applying image augmentation and regularization, the accuracy and AUC increased to 92.05% and 0.97, respectively. Using a pre-trained model, we achieved an overall accuracy of 87.07% and an AUC of 0.96. The results demonstrate the effectiveness of our custom architecture for classifying tumors in this small US imaging dataset, surpassing traditional learning algorithms based on manual feature selection.
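The 5-fold cross-validation protocol described above can be sketched in pure numpy. This is an illustrative skeleton of the evaluation scheme only, not the authors' training code; shuffling with a fixed seed and using `array_split` for fold boundaries are assumptions made here.

```python
import numpy as np

def five_fold_splits(n_samples, seed=0):
    """Yield (train_idx, test_idx) pairs for 5-fold cross-validation.

    Each fold holds out roughly 20% of the samples for testing and
    trains on the remaining 80%, so every sample is tested exactly once
    across the five folds.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)         # shuffle once up front
    folds = np.array_split(idx, 5)           # five near-equal folds
    for k in range(5):
        test_idx = folds[k]
        train_idx = np.concatenate([folds[j] for j in range(5) if j != k])
        yield train_idx, test_idx

# 641 cases, as in the dataset described above.
splits = list(five_fold_splits(641))
```

For a class-imbalanced dataset like this one (413 benign vs. 228 malignant), a stratified variant that preserves the class ratio in each fold would usually be preferred in practice.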