Segmenting the nuclei of cells in microscopy images is often the first step in the quantitative analysis of imaging data for biological and biomedical applications. Many bioimage analysis tools can segment nuclei in images but need to be selected and configured for every experiment. The 2018 Data Science Bowl attracted 3,891 teams worldwide to make the first attempt to build a segmentation method that could be applied to any two-dimensional light microscopy image of stained nuclei across experiments, with no human interaction. Top participants in the challenge succeeded in this task, developing deep-learning-based models that identified cell nuclei across many image types and experimental conditions without the need to manually adjust segmentation parameters. This represents an important step toward configuration-free bioimage analysis software tools.
Morphological profiling aims to create signatures of genes, chemicals and diseases from microscopy images. Current approaches use classical computer-vision-based segmentation and feature extraction. Deep learning models achieve state-of-the-art performance in many computer vision tasks such as classification and segmentation. We propose transferring the activation features of generic, pretrained deep convolutional networks to morphological profiling. Our approach surpasses currently used methods in terms of accuracy and processing speed. Furthermore, it enables fully automated processing of microscopy images without the need for single-cell identification.
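The transfer-learning idea above can be sketched in miniature. This toy numpy example stands in for a pretrained network: a fixed "generic" filter bank plays the role of transferred convolutional weights, each activation map is ReLU-ed and global-average-pooled into one feature, and whole-image profiles are compared by cosine similarity with no single-cell segmentation step. The filter bank, image sizes, and pooling choice here are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def extract_profile(image, filters):
    """Toy stand-in for transferred CNN activations: convolve a fixed
    filter bank over the image, apply ReLU, and global-average-pool
    each activation map into a single feature."""
    h, w = image.shape
    k = filters.shape[1]
    features = []
    for f in filters:
        # valid 2D cross-correlation with one filter
        out = np.zeros((h - k + 1, w - k + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i + k, j:j + k] * f)
        # ReLU then global average pooling -> one feature per filter
        features.append(np.maximum(out, 0.0).mean())
    return np.array(features)

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
filters = rng.standard_normal((8, 3, 3))   # fixed, "pretrained-style" filter bank
img_a = rng.random((16, 16))               # hypothetical microscopy crops
img_b = rng.random((16, 16))
profile_a = extract_profile(img_a, filters)
profile_b = extract_profile(img_b, filters)
print(cosine_similarity(profile_a, profile_b))
```

In practice the filter bank would be replaced by the activations of a real pretrained network, but the pipeline shape is the same: image in, fixed feature extractor, pooled profile out.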
Identifying nuclei is often a critical first step in analyzing microscopy images of cells, and classical image processing algorithms are most commonly used for this task. Recent developments in deep learning can yield superior accuracy, but typical evaluation metrics for nucleus segmentation do not satisfactorily capture the error modes that matter in cellular images. In addition, large image datasets with ground-truth annotations for evaluation have been scarce. We present an evaluation framework to measure accuracy, types of errors, and computational efficiency, and use it to compare two deep learning strategies (U-Net and DeepCell) alongside a classical approach implemented in CellProfiler. We publicly release a set of 23,165 manually annotated nuclei and source code to reproduce the experiments. Our results show that U-Net outperforms both pixel-wise classification networks and classical algorithms. Moreover, our evaluation framework shows that deep learning improves accuracy and halves the number of biologically relevant errors.
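The core of an object-level evaluation like the one described above is matching predicted nuclei to ground-truth nuclei by intersection-over-union (IoU) and counting matched, missed, and spurious objects. The sketch below is a minimal numpy illustration of that matching step under an assumed IoU threshold of 0.5; the actual framework also distinguishes further error types (e.g. splits and merges) and measures runtime.

```python
import numpy as np

def iou_matrix(gt, pred):
    """Pairwise intersection-over-union between objects in two integer
    label masks (0 = background)."""
    gt_ids = [i for i in np.unique(gt) if i != 0]
    pred_ids = [j for j in np.unique(pred) if j != 0]
    ious = np.zeros((len(gt_ids), len(pred_ids)))
    for a, i in enumerate(gt_ids):
        for b, j in enumerate(pred_ids):
            inter = np.sum((gt == i) & (pred == j))
            union = np.sum((gt == i) | (pred == j))
            ious[a, b] = inter / union if union else 0.0
    return ious

def object_scores(gt, pred, threshold=0.5):
    """Count matched, missed, and spurious objects at an IoU threshold."""
    ious = iou_matrix(gt, pred)
    if ious.size == 0:
        return 0, ious.shape[0], ious.shape[1]
    matched = int(np.sum(ious.max(axis=1) >= threshold))
    missed = ious.shape[0] - matched                     # ground truth with no match
    spurious = int(np.sum(ious.max(axis=0) < threshold)) # predictions with no match
    return matched, missed, spurious

# Tiny synthetic example: two true nuclei, one detected perfectly, one missed
gt = np.zeros((6, 6), dtype=int)
gt[0:3, 0:3] = 1
gt[4:6, 4:6] = 2
pred = np.zeros((6, 6), dtype=int)
pred[0:3, 0:3] = 1
print(object_scores(gt, pred))  # -> (1, 1, 0)
```

Counting errors at the object level, rather than averaging pixel accuracy, is what lets an evaluation surface biologically relevant mistakes such as entirely missed nuclei.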
We present a new, carefully designed and well-annotated dataset of images and image-based profiles of cells that have been treated with chemical compounds and genetic perturbations. Each gene that is perturbed is a known target of at least two compounds in the dataset. The dataset can thus serve as a benchmark to evaluate methods for predicting similarities between compounds and between genes and compounds, measuring the effect size of a perturbation, and more generally, learning effective representations for measuring cellular state from microscopy images. Advancements in these applications can accelerate the development of new medicines.
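One of the benchmark tasks described above, predicting a compound's genetic target from profile similarity, can be sketched as a nearest-neighbor retrieval: rank all gene-perturbation profiles by cosine similarity to a compound's profile and check where the known target lands. The profiles below are synthetic placeholders, and the profile dimensionality and similarity measure are illustrative assumptions rather than the dataset's prescribed protocol.

```python
import numpy as np

def rank_of_target(compound_profile, gene_profiles, target_index):
    """Rank (0 = best) of the known target gene when gene profiles are
    sorted by cosine similarity to a compound's image-based profile."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    sims = np.array([cos(compound_profile, g) for g in gene_profiles])
    order = np.argsort(-sims)                 # most similar first
    return int(np.where(order == target_index)[0][0])

rng = np.random.default_rng(1)
gene_profiles = rng.standard_normal((5, 32))  # hypothetical gene-perturbation profiles
# A compound whose profile resembles gene 3, plus measurement noise
compound = gene_profiles[3] + 0.1 * rng.standard_normal(32)
print(rank_of_target(compound, gene_profiles, target_index=3))
```

Averaging this rank over many compound-target pairs gives a simple retrieval score, which is one way a representation-learning method could be compared against baselines on such a benchmark.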