Generalized nucleus segmentation techniques can greatly reduce the time needed to develop and validate visual biomarkers for new digital pathology datasets. We summarize the results of the MoNuSeg 2018 Challenge, whose objective was to develop generalizable nuclei segmentation techniques for digital pathology. The challenge was an official satellite event of the MICCAI 2018 conference, in which 32 teams with more than 80 participants from geographically diverse institutes took part. Contestants were given a training set of 30 images from seven organs, with annotations of 21,623 individual nuclei. A test dataset of 14 images taken from seven organs, including two organs that did not appear in the training set, was released without annotations. Entries were evaluated on the average aggregated Jaccard index (AJI) over the test set, to prioritize accurate instance segmentation as opposed to mere semantic segmentation. More than half of the teams that completed the challenge outperformed a previous baseline [1]. Among the trends observed to contribute to increased accuracy were the use of color normalization and heavy data augmentation. Additionally, fully convolutional networks inspired by variants of U-Net [2], FCN [3], and Mask R-CNN [4] were popular, typically built on ResNet [5] or VGG [6] backbone architectures. Watershed segmentation on predicted semantic segmentation maps was a popular post-processing strategy. Several of the top techniques compared favorably to an individual human annotator and can be used with confidence for nuclear morphometrics.
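The AJI metric mentioned above matches each ground-truth nucleus to the predicted instance with the highest overlap and then aggregates intersections and unions globally, penalizing unmatched predictions. A minimal sketch of this idea, assuming instances are represented as sets of pixel coordinates (this is an illustrative reimplementation, not the challenge's official evaluation code):

```python
def aggregated_jaccard_index(gt_instances, pred_instances):
    """Sketch of the Aggregated Jaccard Index (AJI).

    Both arguments are lists of sets of (row, col) pixel coordinates,
    one set per nucleus instance.
    """
    used = set()      # indices of predictions matched to some ground truth
    inter_sum = 0
    union_sum = 0
    for gt in gt_instances:
        # match each ground-truth nucleus to the prediction with maximal IoU
        best_j, best_iou = None, -1.0
        best_inter, best_union = 0, len(gt)
        for j, pred in enumerate(pred_instances):
            inter = len(gt & pred)
            union = len(gt | pred)
            iou = inter / union if union else 0.0
            if iou > best_iou:
                best_j, best_iou = j, iou
                best_inter, best_union = inter, union
        inter_sum += best_inter
        union_sum += best_union
        if best_j is not None:
            used.add(best_j)
    # unmatched predictions count fully against the union (false positives)
    for j, pred in enumerate(pred_instances):
        if j not in used:
            union_sum += len(pred)
    return inter_sum / union_sum if union_sum else 0.0
```

A perfect prediction yields 1.0, while each spurious predicted nucleus inflates the denominator, which is why AJI rewards correct instance separation rather than pixel-wise agreement alone.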
Positional information is a central concept in developmental biology. In developing organs, positional information can be idealized as a local coordinate system that arises from morphogen gradients controlled by organizers at key locations. This offers a plausible mechanism for the integration of the molecular networks operating in individual cells into the spatially coordinated multicellular responses necessary for the organization of emergent forms. Understanding how positional cues guide morphogenesis requires the quantification of gene expression and growth dynamics in the context of their underlying coordinate systems. Here we present recent advances in the MorphoGraphX software (Barbier de Reuille et al., 2015) that implement a generalized framework to annotate developing organs with local coordinate systems. These coordinate systems introduce an organ-centric spatial context to microscopy data, allowing gene expression and growth to be quantified and compared in the context of the positional information thought to control them.
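The notion of an organ-centric local coordinate system can be illustrated with a toy computation (this is not MorphoGraphX code; the function name, the 2D setting, and the choice of radial/angular coordinates are illustrative assumptions): each cell centroid is annotated with its distance from a hypothetical organizer point and its angle relative to a reference axis.

```python
import math

def organ_coordinates(cell_centroids, organizer, axis):
    """Toy illustration of an organ-centric coordinate annotation.

    cell_centroids: dict mapping cell id -> (x, y) centroid
    organizer:      (x, y) position of a hypothetical organizer
    axis:           (x, y) reference direction for the angular coordinate
    Returns a dict mapping cell id -> (radial, angular) local coordinates.
    """
    ax, ay = axis
    coords = {}
    for cid, (x, y) in cell_centroids.items():
        dx, dy = x - organizer[0], y - organizer[1]
        r = math.hypot(dx, dy)                            # radial position
        theta = math.atan2(dy, dx) - math.atan2(ay, ax)   # angular position
        coords[cid] = (r, theta)
    return coords
```

Once every cell carries such coordinates, expression levels or growth rates can be plotted against position rather than raw image coordinates, which is the kind of comparison the framework described above enables in 3D on curved organ surfaces.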
The quantitative analysis of cellular membranes helps in understanding developmental processes at the cellular level. 3D microscopy image data in particular offers valuable insights into cell dynamics, but error-free automatic segmentation remains challenging due to the huge amount of data generated and strong variations in image intensities. In this paper, we propose a new 3D segmentation approach that combines the discriminative power of convolutional neural networks (CNNs) for preprocessing with an investigation of three watershed-based postprocessing strategies (WS), which are well suited to segmenting object shapes even when supplied with vague seed and boundary constraints. To leverage the full potential of the watershed algorithm, the multi-instance segmentation problem is first interpreted as a three-class semantic segmentation problem, which in turn is well suited to the application of CNNs. Using manually annotated 3D confocal microscopy images of Arabidopsis thaliana, we show the superior performance of the proposed method compared to the state of the art.
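The core idea of the three-class reformulation can be sketched in 2D: the CNN labels each pixel as background, cell boundary, or cell interior; interior components then act as seeds that are grown outward across boundary pixels to recover individual instances. The following is a simplified pure-Python stand-in for marker-based watershed (the function name and the BFS flooding are illustrative assumptions, not the paper's implementation):

```python
from collections import deque

def instances_from_three_class(sem):
    """Turn a three-class semantic map into an instance labeling.

    sem: 2D list with 0 = background, 1 = boundary, 2 = cell interior.
    Returns a 2D list of instance labels (0 = background).
    """
    h, w = len(sem), len(sem[0])
    lab = [[0] * w for _ in range(h)]
    next_label = 0
    # 1) connected-component labeling of the interior class -> seeds
    for y in range(h):
        for x in range(w):
            if sem[y][x] == 2 and lab[y][x] == 0:
                next_label += 1
                lab[y][x] = next_label
                q = deque([(y, x)])
                while q:
                    cy, cx = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and sem[ny][nx] == 2 and lab[ny][nx] == 0:
                            lab[ny][nx] = next_label
                            q.append((ny, nx))
    # 2) grow the seeds into boundary pixels (breadth-first flooding),
    #    a crude substitute for the watershed transform
    q = deque((y, x) for y in range(h) for x in range(w) if lab[y][x])
    while q:
        cy, cx = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = cy + dy, cx + dx
            if 0 <= ny < h and 0 <= nx < w \
                    and sem[ny][nx] == 1 and lab[ny][nx] == 0:
                lab[ny][nx] = lab[cy][cx]
                q.append((ny, nx))
    return lab
```

The explicit boundary class is what lets touching cells remain separable: without it, adjacent interiors would merge into a single connected component.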
Cell division, or mitosis, guarantees the accurate inheritance of the genomic information stored in the cell nucleus. Malfunctions in this process pose a threat to the health and life of the organism, including cancer and manifold other diseases. It is therefore crucial to study the cell cycle in general, and mitosis in particular, in detail. Consequently, a large number of manual and semi-automated time-lapse microscopy image analyses of mitosis have been carried out in recent years. In this paper, we propose a method for automatic detection of cell-cycle stages using a recurrent neural network (RNN). An end-to-end model is trained with a center-cell focus tracker loss and a classification loss. The evaluation was conducted on two time-series datasets, labeled with six and three stages of cell division, respectively. Frame-to-frame accuracy was calculated, and precision, recall, and F1 score were measured for each cell-cycle stage. We also visualized the learned feature space. Image reconstruction from the center-cell focus module shows that the network was able to focus on the center cell and classify it simultaneously. Our experiments validate the superior performance of the proposed network compared to a classifier baseline.
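The per-stage evaluation described above treats each cell-cycle stage as a one-vs-rest classification over frames. A minimal sketch of how such frame-wise precision, recall, and F1 scores can be computed from predicted and ground-truth stage sequences (the function name is a hypothetical helper, not code from the paper):

```python
def per_stage_prf(y_true, y_pred, stages):
    """Frame-wise precision, recall, and F1 per cell-cycle stage.

    y_true, y_pred: equal-length sequences of stage labels, one per frame.
    stages:         iterable of stage labels to evaluate.
    Returns a dict mapping stage -> (precision, recall, f1).
    """
    scores = {}
    for s in stages:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == s and p == s)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != s and p == s)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == s and p != s)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        scores[s] = (precision, recall, f1)
    return scores
```

Reporting these metrics per stage rather than only overall accuracy matters here because stage durations are highly imbalanced: short stages such as metaphase contribute few frames and would be invisible in an aggregate score.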