There has been growing interest in using photonic processors for
performing neural network
inference operations; however, these networks are currently trained
using standard digital electronics. Here, we propose on-chip training
of neural networks enabled by a CMOS-compatible silicon
photonic architecture to
harness the potential for massively parallel, efficient, and fast data
operations. Our scheme employs the direct feedback alignment training
algorithm, which trains neural networks using error feedback rather
than error backpropagation, and can operate at speeds of trillions of
multiply–accumulate (MAC) operations per second while consuming less
than one picojoule per MAC operation. The photonic architecture
exploits parallelized matrix–vector multiplications using arrays of
microring resonators for processing multi-channel analog signals along
single waveguide buses to calculate the gradient vector for each
neural network layer in situ. We also
experimentally demonstrate training deep neural networks with the
MNIST dataset using on-chip MAC operation results. Our approach for
efficient, ultra-fast neural network training showcases photonics as a
promising platform for executing artificial intelligence
applications.
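The direct feedback alignment (DFA) update described above replaces backpropagated errors with a fixed random projection of the output error, so each layer's weight update is a single matrix–vector product followed by an outer product. A minimal NumPy sketch of this training rule on a toy task is shown below; all names (`DFANet`, `B1`, the layer sizes) are illustrative assumptions, not from the paper, and the MAC operations that the photonic architecture performs in hardware are done here in plain floating point.

```python
import numpy as np

rng = np.random.default_rng(0)

class DFANet:
    """One-hidden-layer network trained with DFA instead of backprop."""

    def __init__(self, n_in, n_hidden, n_out):
        self.W1 = rng.normal(0.0, 0.5, (n_hidden, n_in))
        self.W2 = rng.normal(0.0, 0.5, (n_out, n_hidden))
        # Fixed random feedback matrix: projects the output error
        # directly back to the hidden layer, avoiding the transpose
        # of W2 that backpropagation would require.
        self.B1 = rng.normal(0.0, 0.5, (n_hidden, n_out))

    def forward(self, x):
        self.a1 = self.W1 @ x          # hidden pre-activation
        self.h1 = np.tanh(self.a1)     # hidden activation
        return self.W2 @ self.h1       # linear output

    def step(self, x, target, lr=0.05):
        e = self.forward(x) - target                # output error
        d1 = (self.B1 @ e) * (1.0 - self.h1 ** 2)  # DFA hidden-layer error
        self.W2 -= lr * np.outer(e, self.h1)       # local outer-product update
        self.W1 -= lr * np.outer(d1, x)
        return 0.5 * float(e @ e)                  # squared-error loss
```

Because the hidden-layer error `d1` is just a random matrix applied to the output error, every layer can compute its update in parallel once the output error is known, which is what makes the rule attractive for a parallel analog substrate such as the microring-resonator arrays described above.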