2020
DOI: 10.1109/jstqe.2019.2957443

A Winograd-Based Integrated Photonics Accelerator for Convolutional Neural Networks

Abstract: Neural networks (NNs) have become the mainstream technology in the artificial intelligence (AI) renaissance over the past decade. Among the different types of neural networks, convolutional neural networks (CNNs) have been widely adopted, as they have achieved leading results in many fields such as computer vision and speech recognition. This success is due in part to the widespread availability of capable underlying hardware platforms. Applications have always been a driving factor in the design of such hardware arch…

Citations: cited by 36 publications (22 citation statements)
References: 52 publications
“…This enables decoupling the SNR at each layer from the final output by, for instance, electronic signal restoration, with added latency as a possible drawback. Nonetheless, noise is not only a limiting factor but also an opportunity for NNs; for instance, training [57] an NN with noise and accepting an absolute accuracy drop of about 2-3% at inference makes the system more robust against physical noise, since the NN was conditioned with noise 'stress' [34]. Incidentally, small-kernel algorithms such as the Winograd transformation offer an interesting alternative to the FFT filtering approach, given that many CNNs are optimized for small kernel sizes (<13 × 13).…”
Section: Results (mentioning)
confidence: 99%
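The noise-stress training idea quoted above is straightforward to prototype. Below is a minimal PyTorch sketch (an illustration of the concept, not code from the cited works) of a linear layer that injects Gaussian noise only during training, so the learned weights tolerate analog perturbations at inference; the noise_std value and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class NoisyLinear(nn.Module):
    """Linear layer that injects Gaussian noise into its output during
    training only, emulating analog (photonic) hardware noise 'stress'."""
    def __init__(self, in_features, out_features, noise_std=0.05):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.noise_std = noise_std  # illustrative value, not from the paper

    def forward(self, x):
        y = self.linear(x)
        if self.training and self.noise_std > 0:
            # Perturb activations while training so the network learns
            # weights that degrade gracefully under physical noise.
            y = y + self.noise_std * torch.randn_like(y)
        return y
```

Injecting noise only in training mode mirrors the quoted trade-off: a small drop in absolute accuracy in exchange for robustness against the physical noise seen at inference.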
“…Given the above-mentioned high relevance of performing CNN convolution operations in the optical domain, we next turn to the scaling laws of convolution processing. Since convolution is a high-dimensional, ∼N³ problem, parallelization strategies such as multiplexing are key, which is synergistic with optics and photonics [32-34]. An interesting inspiration can be borrowed from Fourier optics [9]: instead of performing the cumbersome (all-to-all) convolution between the data and the kernel, a simpler dot-product multiplication can be performed in the Fourier domain.…”
Section: Introduction (mentioning)
confidence: 99%
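To make the two filtering routes in this quotation concrete, here is a small NumPy sketch (an illustration under assumed tile and kernel sizes, not from either paper): the Fourier route replaces convolution with an element-wise product of spectra, while the Winograd F(2,3) transform computes the same two outputs of a 3-tap kernel with only four multiplications.

```python
import numpy as np

rng = np.random.default_rng(0)
d = rng.standard_normal(4)   # a short input tile
g = rng.standard_normal(3)   # a 3-tap kernel (the small-kernel regime)

# Direct sliding-window filtering (correlation form): two outputs.
direct = np.array([d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
                   d[1]*g[0] + d[2]*g[1] + d[3]*g[2]])

# (1) Fourier route: convolution becomes an element-wise spectral product.
h = g[::-1]                        # flip kernel: correlation = convolution
N = len(d) + len(h) - 1
fourier = np.fft.irfft(np.fft.rfft(d, N) * np.fft.rfft(h, N), N)[2:4]

# (2) Winograd F(2,3): the same two outputs with 4 multiplications
# instead of 6, exploiting the small kernel size.
m1 = (d[0] - d[2]) * g[0]
m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
m4 = (d[1] - d[3]) * g[2]
winograd = np.array([m1 + m2 + m3, m2 - m3 - m4])

assert np.allclose(direct, fourier) and np.allclose(direct, winograd)
```

The multiplication savings of the Winograd transform grow with the output tile size, which is why it suits CNNs dominated by small (e.g., 3 × 3) kernels, where FFT padding overhead is hard to amortize.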
“…1a). A multi-level (2-layer) feed-forward perceptron neural network based on these novel all-optical photonic neurons is trained using a photonic hardware model [24] to accurately simulate the complete process and learn the weights, including analog noise. The network is ultimately emulated on an open-source machine learning framework.…”
Section: Introduction (mentioning)
confidence: 99%
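As a rough illustration of training a 2-layer perceptron through a noisy forward model, the sketch below reuses the NoisyLinear layer from the earlier snippet; the data, layer widths, and optimizer settings are placeholders, not the photonic hardware model of the cited work.

```python
import torch
import torch.nn as nn
# Assumes the NoisyLinear class from the earlier sketch is in scope.

model = nn.Sequential(NoisyLinear(16, 32), nn.ReLU(),
                      NoisyLinear(32, 4))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(256, 16)             # stand-in training data
y = torch.randint(0, 4, (256,))      # stand-in class labels

model.train()                        # noise injection active
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

model.eval()                         # noise off for clean evaluation
```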
“…During this time, advances in computer hardware have enabled the implementation of ideas developed in the field of artificial intelligence over the last century, leading to impressive demonstrations of the potential of artificial neural networks (ANNs) in machine learning (ML). These techniques have been adopted to address problems in a variety of fields in the natural sciences, including solving inverse problems in nanophotonics [15,17,18], nanospectroscopy [19], materials science [20,21], and microscopy [22,23]. Typically, supervised learning employing ANNs is used in two main procedures aimed at solving the inverse problem: classification [19,24] and regression.…”
Section: Introduction (mentioning)
confidence: 99%
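The classification-versus-regression split mentioned in this excerpt amounts to swapping the output head and loss on a shared backbone. The sketch below is a hypothetical PyTorch illustration; all sizes and variable names are assumptions, not taken from the cited papers.

```python
import torch
import torch.nn as nn

# Shared backbone mapping a measured spectrum to features (sizes assumed).
backbone = nn.Sequential(nn.Linear(128, 64), nn.ReLU())

cls_head = nn.Linear(64, 10)   # classification: pick one of 10 designs
reg_head = nn.Linear(64, 3)    # regression: predict 3 continuous parameters

spectrum = torch.randn(8, 128)           # stand-in batch of measurements
features = backbone(spectrum)

# Same features, different head and loss depending on the formulation.
cls_loss = nn.CrossEntropyLoss()(cls_head(features),
                                 torch.randint(0, 10, (8,)))
reg_loss = nn.MSELoss()(reg_head(features), torch.randn(8, 3))
```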