Optical Fiber Communication Conference (OFC) 2021
DOI: 10.1364/ofc.2021.th1a.22

Optimizing Coherent Integrated Photonic Neural Networks under Random Uncertainties

Abstract: We propose an optimization method to improve power efficiency and robustness in silicon-photonic-based coherent integrated photonic neural networks. Our method reduces the network power consumption by 15.3% and the accuracy loss under uncertainties by 16.1%.

Cited by 9 publications (2 citation statements)
References 5 publications

“…Several optical neural networks based on silicon photonic integrated circuits have been proposed, including coherent networks using MZIs [78], [79], [80], [81] and noncoherent networks using MRR banks [77], [82], [83], [84], [85], [86], [87], hence enhancing the hardware implementation of multiplication units with higher speed and lower computation energy consumption compared to electronic counterparts. Despite being beneficial compared to electronic accelerators, photonic AI accelerators suffer from inherent limitations, including optical loss and crosstalk noise [75], large footprint (≈1000 µm for a deep neural network with two hidden layers), and sensitivity to thermal and process variations, which result in deterioration of the system's overall performance (e.g., drop in inferencing accuracy) as the network scales up [88], [89], [90], and [91].…”
Section: Phase Change Materials for Photonic In-Memory Computing
confidence: 99%
“…A method was presented in [6] to counter the impact of both FPVs and thermal effects using modified cost functions during training with added benefits of post-fabrication hardware calibration. The impact of FPVs can also be reduced by minimizing the tuned phase angles in an SPNN; this can be done by leveraging the non-uniqueness of SVD [13] or by pruning redundant phase angles [14], [15]. All these methods focus on mitigating deviations in SPNNs post-fabrication by either using thermal actuators or by post-fabrication training methods to compensate for any additionally introduced phase noise.…”
Section: Related Work on FPV Analysis in SPNNs
confidence: 99%
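
As a numerical illustration of the SVD non-uniqueness invoked in [13] above (a minimal sketch, not the algorithm of that work or of this paper): for W = U Σ V†, any diagonal unitary D = diag(exp(jφ)) yields another exact factorization (U D) Σ (V D)†, because diagonal matrices commute with Σ. The free phases φ are therefore a knob for choosing unitaries whose MZI meshes need smaller tuned phase angles. In the Python sketch below, the helper names refactor and phase_cost are assumptions introduced for illustration, and phase_cost is only a crude proxy; the actual objective would require first mapping U and V onto MZI phase settings (e.g., via a Clements-style decomposition).

# Minimal numerical sketch (not the method of [13] or of this paper) of the
# SVD non-uniqueness: for W = U @ diag(s) @ Vh, any diagonal unitary
# D = diag(exp(1j * phi)) gives another exact factorization
#     W = (U @ D) @ diag(s) @ (V @ D).conj().T,
# since diagonal matrices commute. The free phases phi can then be tuned to
# reduce the programmed phases of the MZI meshes implementing U and V.
# phase_cost below is a hypothetical proxy, not the objective used in [13].

import numpy as np

rng = np.random.default_rng(0)
n = 4
W = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))  # toy weight matrix

U, s, Vh = np.linalg.svd(W)   # W = U @ diag(s) @ Vh
V = Vh.conj().T

def refactor(phi):
    """Return (U', V') such that U' @ diag(s) @ V'.conj().T equals W."""
    D = np.diag(np.exp(1j * phi))
    return U @ D, V @ D

def phase_cost(U_new, V_new):
    """Crude stand-in for the total tuned phase of the two MZI meshes."""
    return np.abs(np.angle(U_new)).sum() + np.abs(np.angle(V_new)).sum()

# Any choice of the free phases reproduces W exactly.
phi = rng.uniform(0.0, 2.0 * np.pi, size=n)
U2, V2 = refactor(phi)
assert np.allclose(U2 @ np.diag(s) @ V2.conj().T, W)

# Crude random search over the free phases to shrink the proxy cost.
best_cost = phase_cost(U, V)
best_phi = np.zeros(n)
for _ in range(2000):
    phi = rng.uniform(0.0, 2.0 * np.pi, size=n)
    cost = phase_cost(*refactor(phi))
    if cost < best_cost:
        best_phi, best_cost = phi, cost

print(f"baseline proxy phase cost: {phase_cost(U, V):.3f}")
print(f"best proxy phase cost after search: {best_cost:.3f}")

In a full implementation, the crude random search would presumably be replaced by a gradient-based or analytical choice of φ, with the per-phase-shifter tuning power of the decomposed meshes as the actual cost.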