2021
DOI: 10.1016/j.snb.2021.130638

High-speed large-scale 4D activities mapping of moving C. elegans by deep-learning-enabled light-field microscopy on a chip


Cited by 12 publications (11 citation statements)
References 48 publications
“…We also anticipate more combinations of DL algorithms and high-content analysis. A “self-learning microscope” system is expected, which would combine automation, high speed, high throughput, and high repeatability, and could help biologists collect a large amount of living cell data and extract reliable and accurate scientific results [25, 124]. Another shortcoming of organoid systems is the lack of communication between tissues.…”
Section: Discussion
confidence: 99%
“…The traditional method does not offer a good solution [121–123]. Therefore, Zhu et al. [124] proposed a new fusion of microfluidics and light-field microscopy to achieve high-speed four-dimensional (4D, space–time) imaging of moving nematodes on a chip. The combination of deep-learning-enabled light-field microscopy (LFM) and chip-based sample manipulation can continuously record the 3D instantaneous position of the nematodes and screen a large number of worms on a high-throughput chip (Fig.…”
Section: Deep Learning In Organoid Images And Potential Integrations
confidence: 99%
“…
https://imagej.nih.gov/ij/
LFDisplay: The Board of Trustees of The Leland Stanford Junior University, http://graphics.stanford.edu/software/LFDisplay/
Other:
Confocal microscope: Olympus FV3000; Leica SP8-STED/FLIM/FCS
Commercial fluorescence microscope: Olympus BX51
Objective: Olympus LUMPlanFLN ×40/NA 0.8 water; Nikon Fluor ×20/0.5 water; Olympus UPLSAPO 40X2 ×40/NA 0.95; Leica HC PL APO CS2 ×20/NA 0.75 oil
Relay system: Nikon AF 60 mm 2.8D
Relay lens: Thorlabs AC508-080-A
Camera: Hamamatsu Flash 4.0 V2
Microlens array: OKO Optics APO-Q-P150-F3.5 (633)
Objective scanner: Physik Instrumente (PI) P-725.4CD
Mirror: Thorlabs PF20-03-P01
Optomechanical components: Thorlabs KCB2C, LCP01T, AC508-080, CH1060, RS6P
Microfluidic chamber: Zhu et al., N/A
…”
Section: Key Resources Table
confidence: 99%
“…The incorporation of deep neural networks enlarges the design space of microscopy by introducing prior knowledge from high-resolution data 20,21 . The previous view-channel-depth light-field microscopy (VCD-LFM) can directly reconstruct a high-resolution 3D volume from 2D light-field (LF) raw data by splitting the various views of the LF and incorporating the successively extracted features into multiple network channels to yield 3D image stacks, pushing the spatial resolution of LFM to the diffraction limit and showing outstanding performance in imaging cellular structures [22][23][24] . However, learning the mapping function that reconstructs 3D super-resolution (SR) volumes from 2D under-sampled LF data is challenging for a usual one-stage model, since the degradation model of light-field imaging is an extremely intricate process that couples multiple forms of resolution degradation with noise and compresses the space–bandwidth product by ~500 times, limiting the ability of such models to reach sub-diffraction-limited resolution with high fidelity.…”
Section: Introduction
confidence: 99%
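The view-splitting step mentioned in the excerpt above (rearranging raw light-field pixels into sub-aperture views before feeding them to a reconstruction network) can be illustrated with a minimal sketch. This is not the cited authors' implementation: the function name `split_lf_views` and the assumption of an ideal n × n pixel grid behind each microlens are illustrative choices.

```python
import numpy as np

def split_lf_views(lf_raw, n):
    """Rearrange a raw light-field image into its n*n angular views.

    lf_raw : 2D array of shape (H*n, W*n), where each microlens
             covers an n x n block of pixels.
    Returns an array of shape (n*n, H, W): one sub-aperture view
    per angular position (u, v) under the microlenses.
    """
    H, W = lf_raw.shape[0] // n, lf_raw.shape[1] // n
    # Pixel (u, v) under every microlens contributes to view (u, v):
    # group rows/cols into (microlens index, intra-lens index), then
    # move the intra-lens (angular) axes to the front.
    views = lf_raw.reshape(H, n, W, n).transpose(1, 3, 0, 2)
    return views.reshape(n * n, H, W)

# Toy example: 3x3 pixels behind each of 4x4 microlenses.
lf = np.arange(12 * 12, dtype=float).reshape(12, 12)
views = split_lf_views(lf, 3)
print(views.shape)  # (9, 4, 4)
```

Each resulting view is a decimated perspective image (here, every 3rd pixel of the raw frame), which is why a learned prior is needed to recover full 3D resolution from such heavily under-sampled data.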