Generalized nucleus segmentation techniques can contribute greatly to reducing the time needed to develop and validate visual biomarkers for new digital pathology datasets. We summarize the results of the MoNuSeg 2018 challenge, whose objective was to develop generalizable nuclei segmentation techniques for digital pathology. The challenge was an official satellite event of the MICCAI 2018 conference, in which 32 teams with more than 80 participants from geographically diverse institutes took part. Contestants were given a training set of 30 images from seven organs with annotations of 21,623 individual nuclei. A test set of 14 images from seven organs, including two organs that did not appear in the training set, was released without annotations. Entries were evaluated by the average aggregated Jaccard index (AJI) on the test set to prioritize accurate instance segmentation over mere semantic segmentation. More than half of the teams that completed the challenge outperformed a previous baseline [1]. Trends that contributed to increased accuracy included color normalization and heavy data augmentation. Fully convolutional networks inspired by variants of U-Net [2], FCN [3], and Mask R-CNN [4] were popular, typically built on ResNet [5] or VGG [6] backbones. Watershed segmentation on predicted semantic segmentation maps was a popular post-processing strategy. Several of the top techniques compared favorably to an individual human annotator and can be used with confidence for nuclear morphometrics.
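A common instantiation of the watershed post-processing strategy mentioned above is marker-based watershed over the distance transform of the predicted semantic mask. The following is a minimal sketch, assuming a nucleus probability map `prob`; the function name, threshold, and `min_distance` value are illustrative choices, not taken from any particular challenge entry.

```python
# Split a predicted semantic nucleus mask into instances via watershed.
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def instances_from_semantic(prob, threshold=0.5):
    """Marker-based watershed on the distance transform of a binary mask."""
    mask = prob > threshold                      # binary semantic segmentation
    distance = ndi.distance_transform_edt(mask)  # distance to background
    # Local maxima of the distance map serve as one marker per nucleus.
    coords = peak_local_max(distance, min_distance=5, labels=mask)
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    # Flood from the markers over the inverted distance map, within the mask.
    return watershed(-distance, markers, mask=mask)
```

Each positive integer in the returned label map identifies one nucleus instance, which is the granularity the AJI metric rewards.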
Pathologic diagnosis of nasopharyngeal carcinoma (NPC) can be challenging because most cases are nonkeratinizing carcinoma with little differentiation and many admixed lymphocytes. Our aim was to evaluate the feasibility of identifying NPC in nasopharyngeal biopsies using deep learning. A total of 726 nasopharyngeal biopsies were included. Among them, 100 cases were randomly selected as the testing set, 20 cases as the validation set, and the remaining 606 cases as the training set. All three datasets had equal numbers of NPC and benign cases. Manual annotation was performed. Cropped square image patches of 256 × 256 pixels were used for patch-level training, validation, and testing. The final patch-level algorithm effectively identified NPC patches, with an area under the receiver operating characteristic curve (AUC) of 0.9900. Using gradient-weighted class activation mapping, we demonstrated that the identification of NPC patches was based on morphologic features of tumor cells. At the second stage, whole-slide images were sequentially cropped into patches, inferred with the patch-level algorithm, and reconstructed into smaller images for slide-level training, validation, and testing. Finally, the AUC was 0.9848 for slide-level identification of NPC. Our results show for the first time that deep learning algorithms can identify NPC.
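As a rough sketch of the two-stage design described above (patch-level inference followed by slide-level reconstruction), the loop below tiles a slide into 256 × 256 patches and writes one patch-level score per pixel of a downscaled image. `patch_model` stands in for the trained patch-level classifier; all names other than the 256-pixel patch size from the abstract are assumptions.

```python
# Tile a whole-slide image and rebuild a downscaled patch-score map.
import numpy as np

PATCH = 256  # patch size used in the study

def slide_to_heatmap(slide, patch_model):
    """Return a (H/256) x (W/256) image of patch-level NPC scores."""
    h, w = slide.shape[:2]
    rows, cols = h // PATCH, w // PATCH
    heatmap = np.zeros((rows, cols), dtype=np.float32)
    for r in range(rows):
        for c in range(cols):
            patch = slide[r * PATCH:(r + 1) * PATCH,
                          c * PATCH:(c + 1) * PATCH]
            # Each pixel of the reconstructed image is one patch score.
            heatmap[r, c] = patch_model(patch)
    return heatmap  # input to the slide-level classifier
```

The reconstructed heatmap is then treated as an ordinary image for the slide-level training, validation, and testing steps.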
Detection of nodal micrometastasis (tumor size 0.2–2.0 mm) is challenging for pathologists because of the small size of metastatic foci. Since lymph nodes with micrometastasis are counted as positive nodes, detecting micrometastasis is crucial for accurate pathologic staging of colorectal cancer. Previously, deep learning algorithms developed with manually annotated images performed well in identifying micrometastasis of breast cancer in sentinel lymph nodes. However, manual annotation is labor intensive and time consuming. Multiple instance learning was later used to identify metastatic breast cancer without manual annotation, but its performance appears worse in detecting micrometastasis. Here, we developed a deep learning model using whole-slide images of regional lymph nodes of colorectal cancer with only a slide-level label (positive or negative). The training, validation, and testing sets included 1963, 219, and 1000 slides, respectively. The TAIWANIA 2 supercomputer was used to train a deep learning model to identify metastasis. At the slide level, our algorithm performed well in identifying both macrometastasis (tumor size > 2.0 mm) and micrometastasis, with areas under the receiver operating characteristic curve (AUC) of 0.9993 and 0.9956, respectively. Since most of our slides contained more than one lymph node, we then tested the algorithm on 538 single-lymph-node images randomly cropped from the testing set. At the single-lymph-node level, the algorithm maintained good performance, with AUCs of 0.9944 for macrometastasis and 0.9476 for micrometastasis. Visualization using class activation mapping confirmed that the model identified nodal metastasis based on areas of tumor cells. Our results demonstrate for the first time that micrometastasis can be detected by deep learning on whole-slide images without manual annotation.
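Training from slide-level labels alone is a weakly supervised setting; one standard formulation consistent with the description above is multiple instance learning with max pooling over patch scores, so that the slide label supervises only the most suspicious patch. The PyTorch sketch below is a generic illustration of that idea, not the authors' implementation.

```python
# Generic max-pooling multiple instance learning over a bag of patches.
import torch
import torch.nn as nn

class MILMaxPool(nn.Module):
    def __init__(self, encoder: nn.Module):
        super().__init__()
        self.encoder = encoder            # patch CNN -> one logit per patch

    def forward(self, patches):           # patches: (n_patches, C, H, W)
        logits = self.encoder(patches).squeeze(-1)   # (n_patches,)
        return logits.max()               # slide logit = most suspicious patch

# Training step (sketch): the slide label supervises the max patch score.
# model = MILMaxPool(my_patch_cnn)
# loss = nn.BCEWithLogitsLoss()(model(patch_batch), slide_label)
# loss.backward()
```

Because gradients flow only through the maximal patch, a positive slide pushes up the score of its strongest candidate region, while a negative slide pushes all patch scores down.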
In esophageal cancer, few prediction tools can be confidently used in current clinical practice. We developed a deep convolutional neural network (CNN) with 798 positron emission tomography (PET) scans of esophageal squamous cell carcinoma and 309 PET scans of stage I lung cancer. In the first stage, we pretrained a 3D-CNN on all PET scans for the task of classifying each scan as esophageal cancer or lung cancer. In the second stage, 548 of the 798 esophageal cancer PET scans were used to classify patients who expired within, versus survived more than, one year after diagnosis. The area under the receiver operating characteristic curve (AUC) was used to evaluate model performance. With pretraining, the deep CNN attained an AUC of 0.738 in identifying patients who expired within one year of diagnosis. In the survival analysis, patients who were predicted to expire but were alive at one year after diagnosis had a 5-year survival rate of 32.6%, significantly worse than that of patients who were predicted to survive and were alive at one year after diagnosis (50.5%, p < 0.001). These results suggest that the prediction model could identify tumors with more aggressive behavior. In the multivariable analysis, the prediction result remained an independent prognostic factor (hazard ratio 2.830; 95% confidence interval 2.252–3.555; p < 0.001). We conclude that a 3D-CNN can be trained on PET image datasets to predict esophageal cancer outcome with acceptable accuracy.
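The two-stage strategy (pretrain on the organ classification task, then reuse the weights for one-year survival prediction) can be sketched as follows; the tiny 3D-CNN and all names here are illustrative assumptions, not the study's architecture.

```python
# Two-stage transfer learning for a 3D-CNN on volumetric PET scans.
import torch
import torch.nn as nn

def make_3d_cnn(n_out):
    return nn.Sequential(
        nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
        nn.MaxPool3d(2),
        nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        nn.Linear(32, n_out),             # index 7: classification head
    )

# Stage 1: pretrain on esophageal-vs-lung classification (all 1107 scans).
pretrain_net = make_3d_cnn(n_out=2)
# ... train pretrain_net on the organ classification task ...

# Stage 2: keep the pretrained feature extractor, swap in a fresh
# one-logit head, and fine-tune on one-year survival (548 scans).
survival_net = make_3d_cnn(n_out=1)
state = {k: v for k, v in pretrain_net.state_dict().items()
         if not k.startswith("7.")}       # drop the 2-way head's weights
survival_net.load_state_dict(state, strict=False)
```

The design choice is standard transfer learning: the larger combined dataset teaches the network generic PET features before the smaller survival dataset specializes it.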
The aim of this study was to investigate the feasibility of using machine learning based on morphological features to classify two subtypes of primary intestinal T-cell lymphoma (PITL) defined by the WHO criteria: monomorphic epitheliotropic intestinal T-cell lymphoma (MEITL) versus intestinal T-cell lymphoma, not otherwise specified (ITCL-NOS), a distinction considered a major challenge for pathological diagnosis. A total of 40 histopathological whole-slide images (WSIs) from 40 surgically resected PITL cases were used as the dataset for model training and testing. A deep neural network was trained to detect and segment the nuclei of lymphocytes, and quantitative nuclear morphometrics were computed from the predicted contours. A decision-tree-based machine learning algorithm, XGBoost, was then trained to classify PITL cases into the two disease subtypes using these nuclear morphometric features. The deep neural network achieved an average precision of 0.881 on the cell segmentation task. In classifying MEITL versus ITCL-NOS, the XGBoost model achieved an area under the receiver operating characteristic curve (AUC) of 0.966. Our research demonstrates an accurate, human-interpretable approach that uses machine learning to reduce the high dimensionality of image features and classify T-cell lymphomas that are challenging to diagnose morphologically. The quantitative nuclear morphometric features may lead to further discoveries concerning the relationship between cellular phenotype and disease status.
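A minimal sketch of this interpretable pipeline: compute a few generic nuclear morphometrics (area, perimeter, circularity) from predicted contours, aggregate them into one fixed-length vector per case, and fit XGBoost. The specific features, aggregation, and hyperparameters below are assumptions for illustration, not the study's choices.

```python
# Per-nucleus morphometrics -> per-case feature vector -> XGBoost classifier.
import numpy as np
import cv2
import xgboost as xgb

def nucleus_features(contour):
    """Area, perimeter, and circularity for one predicted nuclear contour."""
    area = cv2.contourArea(contour)
    perim = cv2.arcLength(contour, True)          # closed contour
    circ = 4 * np.pi * area / (perim ** 2 + 1e-8)  # 1.0 for a perfect circle
    return [area, perim, circ]

def case_vector(contours):
    """Aggregate per-nucleus features into one fixed-length case descriptor."""
    feats = np.array([nucleus_features(c) for c in contours])
    return np.concatenate([feats.mean(axis=0), feats.std(axis=0)])

# X: one aggregated vector per case; y: 0 = MEITL, 1 = ITCL-NOS (assumed coding)
# clf = xgb.XGBClassifier(n_estimators=200, max_depth=3)
# clf.fit(X_train, y_train)  # evaluate with sklearn.metrics.roc_auc_score
```

Because each input feature has a direct morphological meaning, the trained model's feature importances remain human-interpretable, which is the point of this approach over end-to-end image classification.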