Recent advances in neuromorphic computing have established a computational framework that removes the processor–memory bottleneck inherent in traditional von Neumann computing. Moreover, contemporary photonic circuits address the limitations of electrical computational platforms by offering energy-efficient, parallel interconnects whose performance is independent of distance. When employed as synaptic interconnects with reconfigurable photonic elements, they provide an analog platform capable of arbitrary linear matrix operations, including multiply–accumulate operations and convolutions, at extremely high speed and energy efficiency. Both all-optical and optoelectronic nonlinear transfer functions have been investigated for realizing neurons with photonic signals. A number of research efforts have reported estimated improvements of orders of magnitude in computational throughput and energy efficiency. However, achieving the scalability and density of biological neural systems remains challenging for such photonic neuromorphic systems. Recently developed tensor-train-decomposition methods and three-dimensional photonic integration technologies can potentially address both algorithmic and architectural scalability. This tutorial covers architectures, technologies, learning algorithms, and benchmarking for photonic and optoelectronic neuromorphic computers.
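The tensor-train compression mentioned above can be made concrete with a short sketch. The following TT-SVD routine is a generic illustration in NumPy, not the specific method of the works surveyed; the 16×16×16×16 reshaping of a hypothetical 256×256 weight matrix and the maximum rank of 8 are assumptions chosen for the example, and a full TT-matrix format would additionally interleave row and column indices.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Tensor-train decomposition via sequential truncated SVDs (TT-SVD)."""
    dims = tensor.shape
    cores, rank_prev = [], 1
    mat = tensor.reshape(rank_prev * dims[0], -1)
    for k in range(len(dims) - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(u[:, :r].reshape(rank_prev, dims[k], r))  # k-th TT core
        mat = (s[:r, None] * vt[:r]).reshape(r * dims[k + 1], -1)
        rank_prev = r
    cores.append(mat.reshape(rank_prev, dims[-1], 1))           # final core
    return cores

# Hypothetical 256x256 synaptic weight matrix, reshaped into a 4-way tensor.
W = np.random.randn(256, 256)
cores = tt_svd(W.reshape(16, 16, 16, 16), max_rank=8)
print(sum(c.size for c in cores), "TT parameters vs", W.size)   # 2304 vs 65536
```

Storing the cores instead of the full matrix reduces the parameter count dramatically, which is the algorithmic-scalability lever the abstract refers to.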
Spiking neural networks (SNNs) provide a new computational paradigm capable of highly parallelized, real-time processing. Photonic devices are ideal for the design of high-bandwidth, parallel architectures matching the SNN computational paradigm. Furthermore, the co-integration of CMOS and photonic elements combines low-loss photonic devices with analog electronics, offering greater flexibility in the design of nonlinear computational elements. We designed and simulated an optoelectronic spiking neuron circuit on a monolithic silicon photonics (SiPh) process that replicates useful spiking behaviors beyond the leaky integrate-and-fire (LIF) model. Additionally, we explored two learning algorithms with the potential for on-chip learning using Mach–Zehnder interferometer (MZI) meshes as synaptic interconnects. A variation of random backpropagation (RBP) was experimentally demonstrated on-chip and matched the performance of standard linear regression on a simple classification task. In addition, we applied the contrastive Hebbian learning (CHL) rule to a simulated neural network composed of MZI meshes for a random input–output mapping task. The CHL-trained MZI network performed better than random guessing but did not match the performance of an ideal neural network (one without the constraints imposed by the MZI meshes). Through these efforts, we demonstrate that co-integrated CMOS and SiPh technologies are well suited to the design of scalable SNN computing architectures.
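Two of the ingredients above can be sketched concretely. Below is a minimal discrete-time LIF neuron (the baseline behavior the optoelectronic circuit is said to extend) and a generic CHL weight update applied to a plain weight matrix. All parameter values are illustrative assumptions, and neither function reproduces the paper's circuit or the mapping of updates onto MZI phase settings.

```python
import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=20e-3, v_thresh=1.0, v_reset=0.0):
    """Discrete-time leaky integrate-and-fire neuron (illustrative parameters)."""
    v = v_reset
    spikes = np.zeros(len(input_current))
    for t, i_in in enumerate(input_current):
        v += (dt / tau) * (-v + i_in)   # leaky integration of the input
        if v >= v_thresh:               # threshold crossing emits a spike
            spikes[t] = 1.0
            v = v_reset                 # hard reset after firing
    return spikes

def chl_update(w, x_free, y_free, x_clamped, y_clamped, lr=0.01):
    """Contrastive Hebbian Learning: Hebbian term from the clamped (+) phase,
    anti-Hebbian term from the free (-) phase. Operates on a plain weight
    matrix; mapping the update onto MZI phase shifts is a separate step."""
    return w + lr * (np.outer(y_clamped, x_clamped) - np.outer(y_free, x_free))

# A constant supra-threshold drive produces a regular spike train.
print(lif_neuron(np.full(100, 1.5)).sum(), "spikes in 100 steps")
```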