2018
DOI: 10.1038/s41598-018-30619-y

Hybrid optical-electronic convolutional neural networks with optimized diffractive optics for image classification

Abstract: Convolutional neural networks (CNNs) excel in a wide variety of computer vision applications, but their high performance also comes at a high computational cost. Despite efforts to increase efficiency both algorithmically and with specialized hardware, it remains difficult to deploy CNNs in embedded systems due to tight power budgets. Here we explore a complementary strategy that incorporates a layer of optical computing prior to electronic computing, improving performance on image classification tasks while a…
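
The abstract describes placing a layer of optical computing in front of an electronic CNN. Below is a minimal sketch of how such a hybrid model is commonly simulated during design, assuming the optical layer can be modeled as a convolution with a point spread function (PSF) generated by a learnable phase mask in a 4f setup; the `OpticalConvLayer`, `HybridClassifier`, layer sizes, and PyTorch framing are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OpticalConvLayer(nn.Module):
    """Simulated optical correlator: a learnable phase mask defines a PSF,
    and the 'optical computation' is a convolution of the scene with that PSF.
    (Illustrative model only; a real system adds noise, quantization, etc.)"""
    def __init__(self, mask_size=33):
        super().__init__()
        self.phase = nn.Parameter(torch.zeros(mask_size, mask_size))

    def psf(self):
        # Fraunhofer approximation: the PSF is the squared magnitude of the
        # Fourier transform of the pupil function exp(i * phase).
        pupil = torch.exp(1j * self.phase)
        field = torch.fft.fftshift(torch.fft.fft2(pupil))
        psf = field.abs() ** 2
        return psf / psf.sum()               # conserve energy

    def forward(self, x):
        k = self.psf()[None, None]            # (1, 1, H, W) convolution kernel
        return F.conv2d(x, k, padding=k.shape[-1] // 2)

class HybridClassifier(nn.Module):
    """Optical front end followed by a small electronic classifier."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.optics = OpticalConvLayer()
        self.electronic = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8), nn.Flatten(),
            nn.Linear(16 * 8 * 8, num_classes),
        )

    def forward(self, x):
        return self.electronic(self.optics(x))
```

Because the simulated optics are differentiable, the phase mask and the electronic layers can be trained together with an ordinary classification loss.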

Cited by 423 publications (272 citation statements) · References 29 publications
“…Deep Optics. Deep learning can be used for jointly training camera optics and CNN-based estimation methods. This approach was recently demonstrated for applications in extended depth of field and superresolution imaging [39], image classification [2], and multicolor localization microscopy [25]. For example, Hershko et al [25] proposed to learn a custom diffractive phase mask that produced highly wavelength-dependent point spread functions (PSFs), allowing for color recovery from a grayscale camera.…”
Section: Computational Photography for Depth Estimation (mentioning, confidence: 99%)
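
Hershko et al.'s learned phase mask is cited here for producing strongly wavelength-dependent PSFs. The following numpy sketch illustrates that idea under a thin-element, Fraunhofer-approximation model, where the same height map yields different PSFs at different wavelengths; the function name, height map, refractive index, and wavelengths are illustrative assumptions rather than details from the cited work.

```python
import numpy as np

def psf_for_wavelength(height_map, wavelength, ref_index=1.5):
    """PSF of a thin diffractive element at one wavelength (Fraunhofer
    approximation): phase delay = 2*pi*(n-1)*height/lambda, PSF = |FFT(pupil)|^2."""
    phase = 2 * np.pi * (ref_index - 1) * height_map / wavelength
    pupil = np.exp(1j * phase)
    field = np.fft.fftshift(np.fft.fft2(pupil))
    psf = np.abs(field) ** 2
    return psf / psf.sum()

# One fixed height map produces different PSFs at different wavelengths,
# which is what makes color recoverable from a grayscale sensor.
height = np.random.default_rng(0).uniform(0, 1e-6, size=(64, 64))  # meters
psf_red = psf_for_wavelength(height, 640e-9)
psf_green = psf_for_wavelength(height, 530e-9)
```
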
“…Inspired by recent work on deep optics [2,39,12], we interpret the monocular depth estimation problem with coded defocus blur as an optical-encoder, electronic-decoder system that can be trained in an end-to-end manner. Although co-designing optics and image processing is a core idea in computational photography, only differentiable estimation algorithms, such as neural networks, allow for true end-to-end computational camera designs.…”
Section: Introduction (mentioning, confidence: 99%)
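
The optical-encoder, electronic-decoder framing above hinges on gradients flowing through a differentiable model of the optics into both the optical parameters and the reconstruction network. A rough PyTorch sketch of such end-to-end training follows, with the learnable blur kernel, toy decoder, and placeholder data all being illustrative assumptions rather than the cited method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Optical encoder: a learnable blur kernel standing in for the camera optics.
# Electronic decoder: a small CNN that estimates a per-pixel quantity (e.g. depth).
class EncoderDecoder(nn.Module):
    def __init__(self, k=15):
        super().__init__()
        self.kernel_logits = nn.Parameter(torch.zeros(k, k))
        self.decoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, scene):
        # Softmax keeps the simulated PSF non-negative and energy-conserving.
        psf = torch.softmax(self.kernel_logits.flatten(), 0).reshape(1, 1, *self.kernel_logits.shape)
        measurement = F.conv2d(scene, psf, padding=psf.shape[-1] // 2)  # simulated sensor image
        return self.decoder(measurement)

model = EncoderDecoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
scene = torch.rand(4, 1, 64, 64)    # placeholder scenes
target = torch.rand(4, 1, 64, 64)   # placeholder per-pixel targets
for _ in range(10):
    opt.zero_grad()
    loss = F.mse_loss(model(scene), target)
    loss.backward()                 # gradients reach both the optics and the decoder
    opt.step()
```
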
“…There are several recent works that consider the use of machine learning to jointly optimize hardware and software for imaging tasks [5,6,7,8,9,10,11]. These approaches aim to find a fixed set of optical parameters that are optimal for a particular task.…”
Section: Previous Work (mentioning, confidence: 99%)
“…However, one fundamental property of delay systems is their serial nature, not exploiting the potential parallelism offered by optical processes like diffraction. Deep feed-forward neural networks have been realized or discussed using diffraction by complex phase modulations [13], [14], or even volume holograms [15]. We have recently demonstrated the creation of a spatio-temporal photonic reservoir using diffraction [16].…”
Section: Introduction (mentioning, confidence: 99%)
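
This statement refers to deep feed-forward networks realized with diffraction and phase modulations. A rough numpy sketch of how a single diffractive layer is often simulated (phase modulation followed by angular-spectrum free-space propagation) is shown below; the wavelength, pixel pitch, layer spacing, and random masks are arbitrary illustrative choices, not parameters from the cited works.

```python
import numpy as np

def angular_spectrum_propagate(field, dz, wavelength, dx):
    """Propagate a complex field a distance dz using the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    k = 2 * np.pi / wavelength
    # Evanescent components are suppressed by clipping the square-root argument at 0.
    kz = np.sqrt(np.maximum(0.0, k**2 - (2 * np.pi * FX)**2 - (2 * np.pi * FY)**2))
    H = np.exp(1j * kz * dz)
    return np.fft.ifft2(np.fft.fft2(field) * H)

def diffractive_layer(field, phase_mask, dz, wavelength, dx):
    """One layer of a diffractive network: phase modulation, then free-space propagation."""
    return angular_spectrum_propagate(field * np.exp(1j * phase_mask), dz, wavelength, dx)

# Cascade a few layers; the intensity at the output plane is what a detector would read.
field = np.ones((128, 128), dtype=complex)   # plane-wave illumination of the input
for mask in [np.random.default_rng(i).uniform(0, 2 * np.pi, (128, 128)) for i in range(3)]:
    field = diffractive_layer(field, mask, dz=0.05, wavelength=750e-6, dx=0.4e-3)
output_intensity = np.abs(field) ** 2
```

In a trained diffractive network the random masks above would instead be optimized phase patterns, but the per-layer physics is simulated the same way.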