2022
DOI: 10.1002/lpor.202200381
Device‐System End‐to‐End Design of Photonic Neuromorphic Processor Using Reinforcement Learning

Abstract: The incorporation of high-performance optoelectronic devices into photonic neuromorphic processors can substantially accelerate computationally intensive matrix multiplication operations in machine learning (ML) algorithms. However, the conventional designs of individual devices and systems are largely disconnected, and system optimization is limited to the manual exploration of a small design space. Here, a device-system end-to-end design methodology is reported to optimize a free-space optical general mat…

Cited by 6 publications (3 citation statements)
References 59 publications
“…The MVM operations implemented using crossbar arrays can be extended to calculate general matrix–matrix multiplication (GEMM) through block matrix multiplications. [42] We model the error distributions of the MVM calculations in crossbar arrays by adding noise to the standard GEMM function, such as the PyTorch matmul function. The noise is modeled as a random variable following a Cauchy distribution, whose parameters are fit so that the noisy multiplication function generates the same calculation-error distribution as the experimental measurement; see Figure a.…”
Section: Results
confidence: 99%
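The noise model quoted above can be sketched as a thin wrapper around PyTorch's `matmul`: a minimal illustration only, in which the Cauchy scale `gamma` (and the zero location) are hypothetical placeholders — in the cited work these parameters are fit so the noise reproduces the experimentally measured error distribution.

```python
import torch

def noisy_matmul(a, b, gamma=1e-3):
    """GEMM with additive Cauchy-distributed noise, emulating the
    calculation error of crossbar-array MVM hardware.

    `gamma` is an illustrative placeholder; in practice the Cauchy
    parameters are fit to the measured error distribution.
    """
    out = torch.matmul(a, b)
    # Sample heavy-tailed Cauchy noise with the same shape as the output
    cauchy = torch.distributions.Cauchy(loc=0.0, scale=gamma)
    return out + cauchy.sample(out.shape)
```

Because the noise is sampled independently per output element, a large GEMM can still be decomposed into block matrix multiplications with each block passed through the same noisy function.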
“…Training Parameters and Datasets - To evaluate the proposed RubikONNs architecture and the RotAgg and RotSeq training algorithms, we select four public image classification datasets: 1) MNIST-10 (MNIST) [LeCun, 1998]; 2) Fashion-MNIST (FMNIST) [Xiao et al., 2017]; 3) Kuzushiji-MNIST (KMNIST) [Clanuwat et al., 2018]; and 4) Extension-MNIST-Letters (EMNIST) [Cohen et al., 2017], an extension of MNIST to handwritten letters. Specifically, for EMNIST, we customize the dataset to contain the first ten classes (i.e., A-J) to match the D²NN physical system, with 48,000 training examples and 8,000 testing examples.…”
Section: Results
confidence: 99%
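The EMNIST customization described above amounts to filtering a labeled dataset down to its first ten classes. A minimal sketch with NumPy arrays, assuming labels have already been mapped to a 0-indexed scheme (the function and array names are illustrative, not from the cited work):

```python
import numpy as np

def take_first_classes(images, labels, num_classes=10):
    """Keep only samples whose (0-indexed) label is < num_classes,
    mirroring the restriction of EMNIST-Letters to its first ten
    classes (A-J). Names and indexing convention are illustrative."""
    mask = labels < num_classes
    return images[mask], labels[mask]
```

Applied to the full EMNIST-Letters splits, this kind of filter yields the reduced 10-class training and testing sets used to match the 10-detector D²NN system.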
“…Diffractive Deep Neural Networks (D²NN) - Recently, there have been increasing efforts on optical neural networks and optical-computing-based DNN hardware, which bring significant advantages to machine learning systems in terms of power efficiency, parallelism, and computational speed, as demonstrated in various optical computing systems by [Mengu et al., 2020; Lin et al., 2018; Feldmann et al., 2019; Shen et al., 2017; Tait et al., 2017; Rahman et al., 2020; Li et al., 2021; Tang et al., 2023; Lou et al., 2023]. Among them are free-space diffractive deep neural networks (D²NNs), which are based on light diffraction and the phase modulation of the light signal provided by diffractive layers (L1-L5 in Figure 1), featuring millions of neurons in each layer interconnected with neurons in neighboring layers.…”
Section: Introduction
confidence: 99%
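The D²NN mechanism quoted above — a learned phase modulation at each diffractive layer followed by free-space diffraction to the next layer — can be sketched as one forward step using the standard angular-spectrum propagation method. All physical parameters below (layer spacing `dz`, wavelength, pixel pitch) are illustrative placeholders, not values from the cited work.

```python
import numpy as np

def diffractive_layer(field, phase, dz=0.03, wavelength=7.5e-4, pitch=4e-4):
    """One D²NN layer (sketch): apply a learned phase mask, then
    propagate the complex field a distance dz via the angular-spectrum
    method. Parameters are illustrative placeholders."""
    n = field.shape[0]
    # Phase modulation by the diffractive layer (one "neuron" per pixel)
    field = field * np.exp(1j * phase)
    # Angular-spectrum free-space propagation to the next layer
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * dz)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)
```

Stacking several such layers (L1-L5 in the paper's Figure 1) and reading out detector intensities at the output plane gives the all-optical inference pass; training then optimizes the phase masks.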