2020
DOI: 10.1364/oe.399624

DeepCGH: 3D computer-generated holography using deep learning

Abstract: The goal of computer-generated holography (CGH) is to synthesize custom illumination patterns by modulating a coherent light beam. CGH algorithms typically rely on iterative optimization with a built-in trade-off between computation speed and hologram accuracy that limits performance in advanced applications such as optogenetic photostimulation. We introduce a non-iterative algorithm, DeepCGH, that relies on a convolutional neural network with unsupervised learning to compute accurate holograms with fixed computational complexity.
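The abstract describes replacing iterative hologram optimization with a single network forward pass, trained without any precomputed "ground truth" holograms. The sketch below is a minimal, assumed reconstruction of that unsupervised training idea in PyTorch, not the authors' released implementation: the published model targets 3D multi-plane patterns with a larger architecture, whereas the tiny CNN, single 2D Fourier plane, and MSE loss here are illustrative placeholders.

```python
# Minimal sketch (assumptions noted above) of unsupervised CGH training:
# a CNN maps a target intensity to an SLM phase mask; a differentiable
# Fourier-propagation model reconstructs the far-field intensity, and the
# loss compares that reconstruction to the target directly.
import torch
import torch.nn as nn

class PhaseNet(nn.Module):
    """Tiny CNN: target intensity (1 channel) -> phase in [-pi, pi]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, target):
        return torch.pi * torch.tanh(self.net(target))

def propagate(phase):
    """Fourier-plane hologram model: uniform-amplitude field at the SLM,
    one FFT to the focal plane, intensity = |field|^2 (max-normalized)."""
    field = torch.exp(1j * phase.to(torch.complex64))
    far = torch.fft.fftshift(torch.fft.fft2(field), dim=(-2, -1))
    inten = far.abs() ** 2
    return inten / inten.amax(dim=(-2, -1), keepdim=True)

model = PhaseNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(100):                                  # toy training loop
    target = (torch.rand(8, 1, 64, 64) > 0.99).float()   # random sparse spots
    recon = propagate(model(target))
    loss = nn.functional.mse_loss(recon, target)
    opt.zero_grad(); loss.backward(); opt.step()
```

Because the propagation model is differentiable, the network trains against the reconstructed intensity itself; at inference a hologram costs one forward pass, which is the fixed computational complexity the abstract refers to.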

Cited by 162 publications (58 citation statements)
References 26 publications
“…Although recent work successfully reduces the CGH computation time to milliseconds using a pre-trained deep neural network 35 , the computation time could be very long when the neural networks become larger as the number of z-planes increases. The extra computational requirements for generating holographic patterns in the Fourier domain (rather than directly projecting patterns on the conjugate image plane) can become limiting when thousands of different patterns are required (as in high throughput mapping experiments), or when fast online synthesis of custom patterns is needed for closed-loop experiments.…”
Section: Introduction
Mentioning (confidence: 99%)
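The quoted concern is about how Fourier-domain hologram synthesis scales with the number of target z-planes: each plane adds a propagation step to the simulation, and any network trained to reproduce it grows accordingly. A minimal NumPy sketch of angular-spectrum propagation, with illustrative (assumed) wavelength and pixel-pitch values, makes the linear cost explicit:

```python
# Sketch: evaluating one hologram at P z-planes costs one FFT-based
# propagation per plane, i.e. O(P * N^2 log N) for an N x N grid.
# Wavelength and pixel pitch below are illustrative assumptions.
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex field a distance z (metres) via the
    angular-spectrum method, zeroing evanescent components."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0, np.exp(1j * kz * z), 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

phase = np.random.uniform(-np.pi, np.pi, (256, 256))
slm_field = np.exp(1j * phase)
z_planes = np.linspace(-50e-6, 50e-6, 11)        # 11 target planes
stack = [np.abs(angular_spectrum(slm_field, 1.035e-6, 5e-6, z)) ** 2
         for z in z_planes]                      # one propagation per plane
```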
“…In the case of torus, Fig. 2-c shows the distribution of accuracy (ACC) [25], indicating the similarity between the ground truth depth map and the depth map estimated from HDD or CDD, where the x-axis represents the angular degree corresponding to viewpoint (see Fig. 1).…”
Section: Quantitative Results
Mentioning (confidence: 99%)
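The quote compares ground-truth and estimated depth maps using an accuracy metric (ACC) defined in its reference [25], which is not reproduced here. As a stand-in only, a common threshold-based depth accuracy from the monocular-depth literature (the delta < 1.25 convention — an assumption, not necessarily the cited definition) can be written as:

```python
# Hypothetical stand-in for the ACC metric in the quote above: fraction
# of valid pixels whose predicted depth is within a ratio `delta` of the
# ground truth. Assumes strictly positive depths.
import numpy as np

def depth_accuracy(pred, gt, delta=1.25):
    """Fraction of valid pixels with max(pred/gt, gt/pred) < delta."""
    valid = gt > 0
    ratio = np.maximum(pred[valid] / gt[valid], gt[valid] / pred[valid])
    return float(np.mean(ratio < delta))
```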
“…Looking forward, the dissemination of the all-optical interrogation approach will crucially depend on the continuous development of more powerful hardware (Mardinly et al., 2018; Marshel et al., 2019), more intuitive software (this paper and Russell et al., 2019), more accurate optical algorithms (Eybposh et al., 2020) and more sensitive opsins (Mardinly et al., 2018; Marshel et al., 2019) and indicators (Dana et al., 2019). Beyond these specific avenues for improvement, all-optical interrogation will also benefit from ongoing work to increase our ability to image deep in cortical tissue, through the use of three-photon imaging (Horton et al., 2013; Ouzounov et al., 2017; Wang et al., 2018; Weisenburger et al., 2019; Yildirim et al., 2019), red-shifted indicators (Zhao et al., 2011; Inoue et al., 2015; Dana et al., 2016), adaptive optics (Wang et al., 2015; Sun et al., 2016) and GRIN lenses (Levene et al., 2004; Jennings et al., 2019), as well as approaches which allow us to image more neurons (Tsai et al., 2015; Pachitariu et al., 2016; Sofroniew et al., 2016; Stirman et al., 2016; Demas et al., 2021) at faster rates (Lu et al., 2017; Kazemipour et al., 2018; Zhang et al., 2019; Wu et al., 2020).…”
Section: Anticipated Results
Mentioning (confidence: 99%)