Resistive Random Access Memory (RRAM) and Phase Change Memory (PCM) devices have been widely used as synapses in crossbar array based analog Neural Network (NN) circuits to achieve more energy- and time-efficient data classification than conventional computers. Here we demonstrate the advantages of the recently proposed spin orbit torque driven Domain Wall (DW) device as a synapse, compared to RRAM and PCM devices, with respect to on-chip learning (training in hardware) in such NNs. The synaptic characteristic of the DW synapse, which we obtain from micromagnetic modeling, turns out to be much more linear and symmetric (between positive and negative updates) than that of RRAM and PCM synapses. This makes the design of peripheral analog circuits for on-chip learning much easier for a DW synapse based NN than for RRAM and PCM synapse based NNs. We next incorporate the DW synapse as a Verilog-A model in the crossbar array based NN circuit we design in a SPICE circuit simulator. Successful on-chip learning is demonstrated through SPICE simulations on the popular Fisher's Iris dataset. The time and energy required for learning turn out to be orders of magnitude lower for the DW synapse based NN circuit than for RRAM and PCM synapse based NN circuits.
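The qualitative difference between the synaptic characteristics described above can be illustrated with a minimal sketch: a linear, symmetric conductance update (idealized DW synapse) versus a saturating, asymmetric update (RRAM/PCM-like). All parameter values and the specific nonlinearity below are hypothetical and chosen only for illustration; they are not taken from the micromagnetic or device models in the paper.

```python
# Illustrative comparison of synaptic weight-update characteristics.
# DW model: linear, symmetric step per programming pulse.
# RRAM/PCM-like model: exponential saturation, steeper for depression
# than potentiation (asymmetric). Values are assumptions, not measured data.
import numpy as np

G_MIN, G_MAX = 0.0, 1.0   # normalized conductance range
N_PULSES = 100            # programming pulses per sweep

def dw_update(g, potentiate=True):
    """Linear, symmetric update (idealized domain-wall synapse)."""
    step = (G_MAX - G_MIN) / N_PULSES
    return np.clip(g + step if potentiate else g - step, G_MIN, G_MAX)

def nvm_update(g, potentiate=True, nl=3.0):
    """Nonlinear, asymmetric update (RRAM/PCM-like); nl is a nonlinearity factor."""
    if potentiate:
        dg = (G_MAX - g) * (1 - np.exp(-nl / N_PULSES))
    else:
        dg = -(g - G_MIN) * (1 - np.exp(-2 * nl / N_PULSES))  # steeper depression
    return np.clip(g + dg, G_MIN, G_MAX)

g_dw, g_nvm = G_MIN, G_MIN
for _ in range(N_PULSES):                     # potentiation sweep
    g_dw, g_nvm = dw_update(g_dw), nvm_update(g_nvm)
print(f"after potentiation: DW = {g_dw:.2f}, NVM-like = {g_nvm:.2f}")
```

A linear, symmetric characteristic means the peripheral write circuitry only needs to issue a number of identical pulses proportional to the desired weight change, which is what simplifies the on-chip learning hardware.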
On-chip learning in a spin orbit torque driven domain wall synapse based crossbar fully connected neural network (FCNN) has been shown to be extremely efficient in terms of speed and energy when compared to training on a conventional computing unit, or even on a crossbar FCNN based on other non-volatile memory devices. However, there are issues with the scalability of the on-chip learning scheme in the domain wall synapse based FCNN. Unless the scheme is scalable, it will not be competitive with training a neural network on a conventional computing unit for real applications. In this paper, we propose a modification of the standard gradient descent algorithm used for training such an FCNN by including appropriate thresholding units. This optimizes the synapse cell at each intersection of the crossbars and makes the system scalable. For the system to approximate a wide range of functions for data classification, hidden layers must be present, and the backpropagation algorithm (the extension of the gradient descent algorithm to multi-layered FCNNs) must be implemented in hardware for training. We carry this out in this paper by employing an extra crossbar. Through a combination of micromagnetic simulations and SPICE circuit simulations, we show highly improved accuracy for the domain wall synapse based FCNN with a hidden layer compared to one without a hidden layer on different machine learning datasets.
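As a rough sketch of what a thresholded gradient-descent update could look like in software, the snippet below passes each per-synapse update through a threshold so that only fixed-amplitude programming pulses (or no pulse) need to be applied to each cell. The threshold value, the ternary quantization, and the layer dimensions are assumptions made for illustration; the paper's actual thresholding-unit circuit is not reproduced here.

```python
# Hypothetical thresholded gradient-descent step: updates smaller than theta
# are dropped, and the rest are quantized to a single +/- programming pulse
# of fixed size. This keeps the per-cell write circuitry simple.
import numpy as np

def thresholded_update(W, grad, lr=0.1, theta=0.01):
    """Apply fixed-size +/- pulses only where |lr * grad| exceeds theta."""
    delta = -lr * grad
    pulses = np.where(np.abs(delta) > theta, np.sign(delta), 0.0)
    return W + theta * pulses   # each pulse moves the weight by one step

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 3))        # e.g. 4 inputs -> 3 classes (Iris)
grad = rng.normal(scale=0.05, size=W.shape)   # stand-in gradient for illustration
W = thresholded_update(W, grad)
```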
On-chip learning in a crossbar array based analog hardware Neural Network (NN) has been shown to have major advantages in terms of speed and energy compared to training an NN on a traditional computer. However, analog hardware NN proposals and implementations thus far have mostly used Non Volatile Memory (NVM) devices such as Resistive Random Access Memory (RRAM), Phase Change Memory (PCM), spintronic devices, or floating gate transistors as synapses. Fabricating systems based on RRAM, PCM, or spintronic devices needs in-house laboratory facilities and cannot be done through merchant foundries, unlike conventional silicon based CMOS chips. Floating gate transistors need large voltage pulses for weight update, making on-chip learning in such systems energy inefficient. This paper proposes, and implements through SPICE simulations, on-chip learning in an analog hardware NN using only conventional silicon based MOSFETs (without any floating gate) as synapses. We first model the synaptic characteristic of our single-transistor synapse in a SPICE circuit simulator and benchmark it against experimentally obtained current-voltage characteristics of a transistor. Next we design a Fully Connected Neural Network (FCNN) crossbar array using such transistor synapses. We also design analog peripheral circuits for the neuron and for synaptic weight update calculation, needed for on-chip learning, again using conventional transistors. Simulating the entire system in a SPICE circuit simulator, we obtain high training and test accuracy on the standard Fisher's Iris dataset, widely used in machine learning. We also account for device variability and noise in the circuit and show that our circuit still trains on the given dataset. We further compare the speed and energy performance of our transistor based implementation of an analog hardware NN with some previous NVM device based implementations and show comparable performance with respect to on-chip learning. The ease of fabrication makes a hardware NN based on our proposed conventional silicon MOSFET synapses attractive for future implementations.
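For readers unfamiliar with how a crossbar performs the FCNN's core computation, the sketch below shows the vector-matrix multiplication in software terms: input voltages drive the rows, each synapse contributes a current I = G * V, and the column currents sum these contributions (Kirchhoff's current law). Representing signed weights with a differential pair of conductances, and the specific conductance window used here, are assumptions for illustration rather than the paper's circuit values.

```python
# Minimal software sketch of a crossbar vector-matrix multiply with
# differential conductance pairs (G_plus, G_minus) encoding signed weights.
import numpy as np

G_MIN, G_MAX = 1e-6, 1e-4   # assumed synapse conductance window, in siemens

def weights_to_conductances(W):
    """Map signed weights in [-1, 1] to a differential conductance pair."""
    Wc = np.clip(W, -1.0, 1.0)
    g_plus = G_MIN + (G_MAX - G_MIN) * np.clip(Wc, 0, None)
    g_minus = G_MIN + (G_MAX - G_MIN) * np.clip(-Wc, 0, None)
    return g_plus, g_minus

def crossbar_mvm(v_in, g_plus, g_minus):
    """Column output currents: difference of the two crossbar column sums."""
    return v_in @ g_plus - v_in @ g_minus

rng = np.random.default_rng(1)
W = rng.uniform(-1, 1, size=(4, 3))        # 4 Iris features -> 3 classes
v = np.array([0.51, 0.35, 0.14, 0.02])     # example input voltage vector (V)
i_out = crossbar_mvm(v, *weights_to_conductances(W))
print(i_out)   # output currents, proportional to W^T v
```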