2018
DOI: 10.1007/s00521-018-3659-y

Adaptive learning rule for hardware-based deep neural networks using electronic synapse devices

Abstract: In this paper, we propose a learning rule based on a back-propagation (BP) algorithm that can be applied to a hardware-based deep neural network (HW-DNN) using electronic devices that exhibit discrete and limited conductance characteristics. This adaptive learning rule, which enables forward propagation, backward propagation, and weight updates in hardware, is helpful for implementing power-efficient and high-speed deep neural networks. In simulations using a three-layer perceptron network, we evaluate …
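The "discrete and limited conductance" constraint the abstract refers to can be made concrete with a short sketch. This is a minimal illustration assuming a hypothetical device with 64 evenly spaced levels; the level count and bounds are assumptions, not values from the paper:

```python
import numpy as np

# Hypothetical device model (all values are assumptions, not from the paper):
# a synapse with 64 evenly spaced conductance levels between G_MIN and G_MAX.
G_MIN, G_MAX, N_LEVELS = 1e-7, 1e-5, 64
STEP = (G_MAX - G_MIN) / (N_LEVELS - 1)

def quantize_conductance(g):
    """Clip an ideal conductance to the device range and snap it to the
    nearest discrete level, mimicking 'discrete and limited' weights."""
    g = np.clip(g, G_MIN, G_MAX)
    return G_MIN + np.round((g - G_MIN) / STEP) * STEP

print(quantize_conductance(3.14e-6))  # lands on one of the 64 levels
```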

Cited by 58 publications (40 citation statements)
References 37 publications
“…For high performance in HNNs, various synaptic arrays and neuron circuits suited to efficient architectures and learning algorithms have been researched [4][5][6][7][8]. Specifically, both excitatory (G⁺) and inhibitory (G⁻) synaptic arrays are important for improving the accuracy of HNNs [9][10][11]. Neuron circuits that use large capacitors (≥ 0.1 pF) and many transistors (≥ 11 MOSFETs) to simultaneously process the signals from these two types of synapses have been reported [11][12][13], resulting in increased power consumption and a larger area.…”
Section: Introduction
confidence: 99%
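A minimal sketch of the differential-pair idea in this excerpt, assuming the effective weight of each synapse is the difference between an excitatory and an inhibitory conductance and that each output neuron sums its column currents (all shapes and values below are illustrative):

```python
import numpy as np

def column_currents(v_in, g_plus, g_minus):
    """Kirchhoff current sum per output column: I_j = sum_i V_i * (G+_ij - G-_ij)."""
    return v_in @ (g_plus - g_minus)

v = np.array([0.0, 0.2, 0.2])      # input voltages (V), illustrative
gp = np.full((3, 2), 5e-6)         # excitatory conductances G+ (S)
gm = np.full((3, 2), 2e-6)         # inhibitory conductances G- (S)
print(column_currents(v, gp, gm))  # net current into each output neuron (A)
```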
“…By using the sign of ΔW_ij, we can update the conductance of the synaptic devices. The weight (W_ij) of a synaptic device can be modified by one step (+|ΔG⁻_ij|, −|ΔG⁺_ij|) at each iteration according to the sign of ΔW_ij, reducing the burden on the periphery circuit [14].…”
Section: Operation Scheme of Multi-layer Neural Network
confidence: 99%
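A sketch of the sign-based, one-step update this excerpt describes, assuming a differential weight W = G⁺ − G⁻ and made-up step sizes and bounds; it illustrates the general scheme rather than the authors' exact procedure:

```python
import numpy as np

DG_STEP = 1e-7   # assumed one-step conductance increment
G_MAX = 1e-5     # assumed upper conductance bound

def sign_update(g_plus, g_minus, delta_w):
    """Move each synapse by exactly one conductance step per iteration:
    raise G+ where the desired update dW > 0, raise G- where dW < 0."""
    g_plus = np.where(delta_w > 0, g_plus + DG_STEP, g_plus)
    g_minus = np.where(delta_w < 0, g_minus + DG_STEP, g_minus)
    return np.clip(g_plus, 0.0, G_MAX), np.clip(g_minus, 0.0, G_MAX)
```

Because only the sign of ΔW_ij has to reach the array, the periphery circuit can issue identical programming pulses instead of computing analog update amplitudes, which is the burden reduction the excerpt mentions.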
“…We also investigated device variation by measuring NAND flash cells and verified their reliability through endurance and retention measurements. In a matched computer simulation, a three-layer perceptron network with 40,545 synapses is trained on the MNIST data set using the weight-update method of [14], which is appropriate for our device.…”
Section: Introduction
confidence: 99%
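As a hypothetical illustration of how measured device variation could enter such a matched simulation, each cell can be given its own effective programming step; the 10% sigma below is an assumed figure, not a measured NAND-flash statistic:

```python
import numpy as np

rng = np.random.default_rng(0)
NOMINAL_STEP = 1e-7  # assumed nominal conductance increment per pulse

def make_step_map(shape, sigma=0.10):
    """Draw a fixed multiplicative deviation per cell, so every synapse
    programs with its own step (device-to-device variation)."""
    return NOMINAL_STEP * rng.normal(1.0, sigma, size=shape)

steps = make_step_map((784, 50))  # illustrative layer shape, not the paper's
```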
“…The work in [36] experimentally shows that the memristive device can be tuned to within 1% of the desired state, within the dynamic range of the device. Furthermore, the use of on-chip learning schemes will help account for these variations [50]. Resistance variations can occur in the neuron memristor as well. However, this does not cause any significant read error at the output, since the off-to-on resistance ratio is on the order of 10⁴–10⁷ [15] and the resistor-divider circuit can detect this large drop with almost zero error (refer to Section V).…”
Section: The Impact of Synaptic Weight Variations
confidence: 99%
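The read-margin argument can be checked numerically. With an assumed divider resistor and an off-to-on ratio of 10⁴ (the low end quoted above), the divider outputs for the two states are nearly rail-to-rail, so moderate drift in the neuron memristor's resistance barely moves the read decision; all component values here are assumptions:

```python
# Resistor divider read-out: V_out = V_READ * R_DIV / (R_DIV + R_mem)
V_READ, R_DIV = 1.0, 1e5   # assumed read voltage (V) and divider resistor (ohm)
R_ON, R_OFF = 1e4, 1e8     # assumed on/off resistances, ratio 1e4

def divider_out(r_mem):
    return V_READ * R_DIV / (R_DIV + r_mem)

print(divider_out(R_ON))   # ~0.91 V for the on-state
print(divider_out(R_OFF))  # ~0.001 V for the off-state
```

Even a 2x drift in R_ON or R_OFF leaves the two outputs far apart, which matches the excerpt's claim of almost zero read error.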