High-throughput microscopy has outpaced analysis; biomarker-optimized CNNs are a generalizable, fast, and interpretable solution.
Cell death is an essential process in biology that must be accounted for in live microscopy experiments. Nevertheless, cell death is difficult to detect without perturbing experiments with stains, dyes, or biosensors that can bias experimental outcomes, lead to inconsistent results, and reduce the number of processes that can be simultaneously labeled. These additional steps also make live microscopy difficult to scale for high-throughput screening because of the cost, labor, and analysis they entail. We address this fundamental limitation of live microscopy with biomarker-optimized convolutional neural networks (BO-CNN): computer vision models trained with a ground-truth biosensor that detect live cells with superhuman, 96% accuracy, more than 100 times faster than previous methods. Our models learn to identify important morphological characteristics associated with cell vitality without human input or additional perturbations, and to generalize to other imaging modalities and cell types for which they have no specialized training. We demonstrate that we can interpret decisions from BO-CNN models to gain biological insight into the patterns they use to achieve superhuman accuracy. The BO-CNN approach is broadly useful for live microscopy, and affords a powerful new paradigm for advancing the state of high-throughput imaging in a variety of contexts.
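A minimal sketch of the kind of training setup this abstract implies is shown below, assuming PyTorch, a small illustrative architecture, and synthetic stand-ins for single-cell image crops and biosensor-derived live/dead labels; none of these specifics (crop size, layer widths, optimizer settings) come from the paper itself.

```python
# Illustrative BO-CNN-style classifier: a small convolutional network trained
# on single-cell morphology crops whose live/dead labels are derived from a
# ground-truth biosensor channel rather than from human curation.
# Architecture, crop size, and data below are assumptions, not the published model.
import torch
import torch.nn as nn

class BOCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 2)  # two classes: live vs. dead

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Synthetic stand-ins: 64x64 morphology crops and biosensor-derived labels.
crops = torch.randn(128, 1, 64, 64)
labels = torch.randint(0, 2, (128,))

model = BOCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):
    optimizer.zero_grad()
    loss = loss_fn(model(crops), labels)
    loss.backward()
    optimizer.step()
```

Once trained, such a model classifies live versus dead cells directly from the morphology channel, so the biosensor is only needed to generate labels during training, not at inference time.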
Neurotoxicity can be detected in live microscopy by morphological changes such as retraction of neurites, fragmentation, blebbing of the neuronal soma, and ultimately the disappearance of fluorescently labeled neurons. However, quantification of these features is often difficult, low-throughput, and imprecise due to an overreliance on human curation. Recently, we showed that convolutional neural network (CNN) models can outperform human curators in the assessment of neuronal death from images of fluorescently labeled neurons, suggesting that there is information within the images that indicates toxicity but is not apparent to the human eye. In particular, the CNN's decision strategy indicated that information within the nuclear region was essential for its superhuman performance. Here, we systematically tested this prediction by comparing images of fluorescent neuronal morphology from nuclear-localized fluorescent protein with those from freely diffusing fluorescent protein for classifying neuronal death. We found that biomarker-optimized (BO-)CNNs could learn to classify neuronal death from nuclear-localized fluorescent protein morphology (mApple-NLS-CNN) alone, with superhuman accuracy. Furthermore, leveraging methods from explainable artificial intelligence, we identified novel features within the nuclear-localized fluorescent protein signal that were indicative of neuronal death. Our findings suggest that the use of a nuclear morphology marker in live imaging, combined with computational models such as mApple-NLS-CNN, can provide an optimal readout of neuronal death, a common result of neurotoxicity.
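One common explainable-AI readout of the kind referenced above is an occlusion sensitivity map: masking regions of a nuclear crop and measuring how the predicted class probability changes. The sketch below assumes PyTorch and uses a placeholder classifier and random input, not the published mApple-NLS-CNN or its data.

```python
# Occlusion sensitivity sketch: slide a patch over a nuclear-morphology crop
# and record how much the target class probability drops when that region is
# masked; large drops mark regions the model relies on.
# Model, patch size, and input are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(  # stand-in classifier: 1-channel crop -> 2 logits
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(8, 2),
)
model.eval()

def occlusion_map(model, crop, target_class, patch=8, stride=8):
    """Return a coarse heatmap of probability drops under occlusion."""
    _, h, w = crop.shape
    with torch.no_grad():
        base = torch.softmax(model(crop[None]), dim=1)[0, target_class]
    heat = torch.zeros(h // stride, w // stride)
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = crop.clone()
            occluded[:, y:y + patch, x:x + patch] = crop.mean()
            with torch.no_grad():
                p = torch.softmax(model(occluded[None]), dim=1)[0, target_class]
            heat[i, j] = base - p
    return heat

crop = torch.randn(1, 64, 64)  # synthetic nuclear-morphology crop
print(occlusion_map(model, crop, target_class=1))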
Live-cell imaging is an important technique for studying cell migration and proliferation, as well as for image-based profiling of drug perturbations over time. To gain biological insights from live-cell imaging data, it is necessary to identify individual cells, follow them over time, and extract quantitative information. However, because biological experiments often cannot afford high temporal resolution, in order to limit excessive illumination and avoid unnecessary oversampling when monitoring long-term dynamics, obtaining good tracking results from coarsely sampled imaging data remains challenging. To address this problem, we formulate cell tracking as a stable matching problem and propose a robust tracking method based on Voronoi partitioning that adapts the parameters that would otherwise need to be set according to the spatio-temporal characteristics of live-cell imaging data, such as cell population and migration. We demonstrate the performance improvement provided by the proposed method using numerical simulations and compare its performance with proximity-based and nearest-neighbor-based tracking.
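The stable-matching formulation can be illustrated with a short Gale-Shapley sketch in Python, assuming detections are point centroids and that every detection prefers closer partners; the Voronoi-based adaptive parameter selection described above is not reproduced here, and the coordinates are made up.

```python
# Frame-to-frame cell linking as stable matching: detections in frame t
# propose to detections in frame t+1 in order of increasing distance, and
# Gale-Shapley yields an assignment with no mutually-preferring unmatched pair.
import numpy as np

def stable_match(prev_pts, next_pts):
    """Return a dict mapping index in prev_pts -> matched index in next_pts."""
    d = np.linalg.norm(prev_pts[:, None, :] - next_pts[None, :, :], axis=2)
    prefs = {i: list(np.argsort(d[i])) for i in range(len(prev_pts))}
    match_next = {}                      # next index -> prev index
    free = list(range(len(prev_pts)))
    while free:
        i = free.pop(0)
        while prefs[i]:
            j = prefs[i].pop(0)          # closest not-yet-proposed detection
            if j not in match_next:
                match_next[j] = i
                break
            k = match_next[j]            # j keeps whichever proposer is closer
            if d[i, j] < d[k, j]:
                match_next[j] = i
                free.append(k)
                break
    return {i: j for j, i in match_next.items()}

prev_pts = np.array([[10.0, 10.0], [40.0, 12.0], [25.0, 30.0]])
next_pts = np.array([[12.0, 11.0], [41.0, 15.0], [27.0, 33.0]])
print(stable_match(prev_pts, next_pts))
```

In a full tracker, appearing and disappearing cells would be handled by allowing detections to remain unmatched beyond a distance threshold, which is one of the parameters the Voronoi-based scheme described above adapts automatically.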