Three-dimensional virtual refocusing of fluorescence microscopy images using deep learning
2019
DOI: 10.1038/s41592-019-0622-5

Abstract: Three-dimensional (3D) fluorescence microscopy in general requires axial scanning to capture images of a sample at different planes. Here we demonstrate that a deep convolutional neural network can be trained to virtually refocus a 2D fluorescence image onto user-defined 3D surfaces within the sample volume. With this data-driven computational microscopy framework, we imaged the neuron activity of a Caenorhabditis elegans worm in 3D using a time-sequence of fluorescence images acquired at a single focal plane,…
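The core idea described in the abstract, conditioning a convolutional network on a user-defined target surface so that a single 2D acquisition can be computationally refocused onto it, can be sketched in a few lines. The PyTorch code below is an illustrative toy, not the authors' published model: the two-channel conditioning scheme, the layer widths, and the names VirtualRefocusNet and defocus_map are assumptions made for the example.

```python
# Minimal sketch (assumed architecture, not the published network):
# a CNN takes a 2D fluorescence image plus a per-pixel target-defocus map
# and predicts the image refocused onto that user-defined surface.
import torch
import torch.nn as nn

class VirtualRefocusNet(nn.Module):
    """Toy CNN: image + per-pixel target-defocus map -> virtually refocused image."""
    def __init__(self, features: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Conv2d(features, 1, kernel_size=3, padding=1)

    def forward(self, image: torch.Tensor, defocus_map: torch.Tensor) -> torch.Tensor:
        # Stack the measurement and the user-defined surface as two input channels.
        x = torch.cat([image, defocus_map], dim=1)
        return self.decoder(self.encoder(x))

# Usage: refocus one 256x256 frame onto a tilted plane spanning -2 to +2 um of defocus.
net = VirtualRefocusNet()
frame = torch.rand(1, 1, 256, 256)                        # single 2D fluorescence acquisition
tilt = torch.linspace(-2.0, 2.0, 256).view(1, 1, 1, 256)  # defocus distance per column (um)
surface = tilt.expand(1, 1, 256, 256)                     # broadcast into a full defocus map
refocused = net(frame, surface)                           # same spatial size as the input
print(refocused.shape)                                    # torch.Size([1, 1, 256, 256])
```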


Cited by 212 publications (164 citation statements)
References 58 publications

Selected citation statements (ordered by relevance):
“…Morphology has long been a cue for cell biologists and pathologists to recognize cell type and abnormalities related to disease (Bakal et al, 2007; Chan, 2014; Eddy et al, 2018; Gordonov et al, 2015; Gurcan et al, 2009; López, 2013; Pavillon et al, 2018; Wu et al, 2020; Yin et al, 2013). In this study, we rely on the exquisite sensitivity of deep-learned artificial neural networks in recognizing subtle but systematic image patterns to classify different cell types and cell states.…”
Section: Visually Unstructured Properties Of Cell Image Appearance En…
Citation type: mentioning
Confidence: 99%
“…However, the often-cited weakness of these techniques is the lack of an intuitive explanation of which parts of the data are particularly meaningful in defining the extracted pattern. While in some applications, such as image segmentation, image restoration or mapping between imaging modalities, a well-validated outcome of a network has been satisfactory (Christiansen et al, 2018; Fang et al, 2019b; Guo et al, 2019; Hershko et al, 2019; Hollandi et al, 2019; LaChance and Cohen, 2020; Moen et al, 2019; Nehme et al, 2018; Ounkomol et al, 2018; Ouyang et al, 2018; Rivenson et al, 2019; Wang et al, 2019; Weigert et al, 2018; Wu et al, 2019), there is increasing mistrust in results produced by 'black-box' neural networks. Aside from increasing the confidence, the analysis of the properties, also referred to as 'mechanisms', of the pattern recognition process can potentially generate insight into a biological/physical phenomenon that escapes analysis driven by human intuition.…”
Section: Interpretation Of Latent Features Discriminating High and Lo…
Citation type: mentioning
Confidence: 99%
“…While we have concentrated here on label-free samples, our method has the potential to be adapted to fluorescently labelled samples and to be combined with state-of-the-art computational techniques, such as deep learning. For instance, a deep-learning method for deducing z-positions from fluorescence microscopy images was recently described [44]. Our method could incorporate similar computational techniques to make use of the information provided by multiple planes for extending the z-range and enhancing the precision in fluorescence microscopy.…”
Section: Discussion
Citation type: mentioning
Confidence: 99%
“…Convolutional neural network (CNN) and deep learning approaches have been proposed for several optical applications. Examples include virtual staining of non-stained samples [33], increasing spatial resolution over a large field of view in optical microscopy [34,35], color holographic microscopy with CNNs [36], autofocusing and enhancing the depth-of-field in inline holography [37], lens-less computational imaging by deep learning [38], single-cell-based reconstruction distance estimation by a regression CNN model [39], super-resolution fringe patterns by deep-learning holography [40], virtual refocusing in fluorescence microscopy to map 2D images to a 3D surface [41], and several other studies [42][43][44]. Deep-learning-based phase recovery by a residual CNN model was also suggested [45], but the application is limited because the reference noise-free phase images for the deep-learning model are generated by the multi-height phase retrieval approach (8 holograms are recorded at different sample-to-sensor distances).…”
Section: Proposed Deep Learning Model For Phase Recovery
Citation type: mentioning
Confidence: 99%