2021 58th ACM/IEEE Design Automation Conference (DAC)
DOI: 10.1109/DAC18074.2021.9586323
In-Hardware Learning of Multilayer Spiking Neural Networks on a Neuromorphic Processor

Cited by 23 publications (7 citation statements)
References 11 publications
“…In general, the performance of artificial neural networks can be evaluated through their recognition capability on several types of pattern images. [68][69][70] To confirm the capability of the developed PVA-based memristor for constructing a complex neural network, we conducted a numerical SPICE simulation of handwritten-digit pattern recognition on the Modified National Institute of Standards and Technology (MNIST) dataset, [71,72] as shown in Figure 5e. In the simulation, 60,000 images were used for learning and 10,000 for classification tests, and each image pixel had 256 grayscale levels.…”
Section: Results (mentioning)
confidence: 99%
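For context, the MNIST setup described in the excerpt above (60,000 training images, 10,000 test images, 8-bit grayscale pixels) can be verified with a few lines of Python. The torchvision loader below is purely illustrative; the cited work ran SPICE circuit simulations, not PyTorch.

# Illustrative sketch only: inspect the MNIST split described above.
# Assumes torchvision is installed; not the tooling used in the cited work.
from torchvision.datasets import MNIST

train = MNIST(root="./data", train=True, download=True)
test = MNIST(root="./data", train=False, download=True)

print(len(train), len(test))                          # 60000 10000
print(tuple(train.data.shape))                        # (60000, 28, 28)
print(int(train.data.min()), int(train.data.max()))   # 0 255, i.e. 256 gray levels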
“…In addition, when a current-sensing resistor of 1 mΩ was used, the developed hardware neural network classified digit images while consuming about 255 pJ, far more efficient than its von Neumann counterparts. [70] This implies that the developed PVA-based memristor, with its biodegradability and mechanical flexibility, can serve as a synaptic device in energy-efficient hardware neural networks with high integration density.…”
Section: Results (mentioning)
confidence: 99%
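A 1 mΩ current-sensing resistor suggests the ~255 pJ figure was obtained by integrating supply power over one classification window. In generic notation (ours, not the excerpt's), with R_s the sense resistance, v_s(t) the voltage drop across it, and V_dd the supply voltage:

E = \int_0^T V_{dd}\, i(t)\, dt = \frac{V_{dd}}{R_s} \int_0^T v_s(t)\, dt, \qquad i(t) = \frac{v_s(t)}{R_s}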
“…In general, the performance of artificial neural networks can be evaluated through their recognition capability on several types of pattern images (Feng et al., 2021; Kim et al., 2021b; Shrestha et al., 2021). To confirm the capability of the developed PVA-based memristor for constructing a complex neural network, we conducted a numerical SPICE simulation of handwritten-digit pattern recognition on the Modified National Institute of Standards and Technology (MNIST) dataset (Wang et al., 2020), as shown in Figure 5e.…”
Section: Results (mentioning)
confidence: 99%
“…In the neural network based on the developed memristor, the pattern recognition accuracy was about 92% after 50 training epochs, very close to that of the ideal software system (see Figure 5i). In addition, when a current-sensing resistor of 1 mΩ was used, the developed hardware neural network classified digit images while consuming about 255 pJ, far more efficient than its von Neumann counterparts (Shrestha et al., 2021). This implies that the developed PVA-based memristor, with its biodegradability and mechanical flexibility, can serve as a synaptic device in energy-efficient hardware neural networks with high integration density.…”
Section: Results (mentioning)
confidence: 99%
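As a numerical companion to the energy expression above, a sampled sense-resistor voltage trace can be integrated directly. Every value below (supply voltage, trace shape, window length) is a placeholder, not a measurement from the cited work.

import numpy as np

# Hypothetical trace across the 1 mOhm sense resistor (placeholder data).
R_S = 1e-3                            # sense resistance, ohms
V_DD = 1.0                            # assumed supply voltage, volts
t = np.linspace(0.0, 1e-6, 1001)      # assumed 1 us classification window
v_sense = 0.5e-6 * np.ones_like(t)    # placeholder 0.5 uV drop across R_S

i = v_sense / R_S                     # supply current, amps
dt = t[1] - t[0]
energy = np.sum(V_DD * i) * dt        # E ~ sum of V_dd * i(t) * dt, joules
print(f"{energy * 1e12:.0f} pJ per classification")   # ~500 pJ with these placeholders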
“…To our knowledge, this work is the first to show an SNN implementation of the backpropagation algorithm that is fully on-chip, without a computer in the loop. Other on-chip learning approaches so far use either feedback alignment [67], forward propagation of errors [32], or single-layer training [25,27,29,68,69]. Compared to an equivalent implementation on a GPU, there is no loss in accuracy, but there are roughly two orders of magnitude in power savings for the small batch sizes that are more realistic in edge-computing settings.…”
Section: Significance (mentioning)
confidence: 99%
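The excerpt contrasts fully on-chip backpropagation with feedback alignment and single-layer training. Off-chip, multilayer SNNs are commonly trained with surrogate gradients, and the PyTorch sketch below illustrates that general idea only; it is our own minimal construction, not the cited paper's on-chip algorithm.

import torch
import torch.nn.functional as F

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike forward; fast-sigmoid surrogate gradient backward."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out / (1.0 + 10.0 * v.abs()) ** 2

spike = SurrogateSpike.apply

def run_snn(x, w1, w2, steps=20, beta=0.9, thr=1.0):
    """Two leaky integrate-and-fire layers; returns output spike counts."""
    v1 = torch.zeros(x.shape[0], w1.shape[1])
    v2 = torch.zeros(x.shape[0], w2.shape[1])
    counts = torch.zeros_like(v2)
    for _ in range(steps):
        v1 = beta * v1 + x @ w1       # leaky membrane integration
        s1 = spike(v1 - thr)
        v1 = v1 - thr * s1            # soft reset on spike
        v2 = beta * v2 + s1 @ w2
        s2 = spike(v2 - thr)
        v2 = v2 - thr * s2
        counts = counts + s2
    return counts

# Toy usage: the loss gradient reaches both weight matrices through the spikes.
x = torch.rand(8, 784)                # batch of 8 rate-coded inputs (illustrative)
y = torch.randint(0, 10, (8,))
w1 = (0.05 * torch.randn(784, 100)).requires_grad_()
w2 = (0.05 * torch.randn(100, 10)).requires_grad_()
loss = F.cross_entropy(run_snn(x, w1, w2), y)
loss.backward()
print(w1.grad.abs().mean() > 0)       # tensor(True): multilayer credit assignment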