2016
DOI: 10.3389/fnins.2016.00333

Acceleration of Deep Neural Network Training with Resistive Cross-Point Devices: Design Considerations

Abstract: In recent years, deep neural networks (DNNs) have demonstrated significant business impact in large-scale analysis and classification tasks such as speech recognition, visual object detection, and pattern extraction. Training of large DNNs, however, is universally considered a time-consuming and computationally intensive task that demands datacenter-scale computational resources for many days. Here we propose a concept of resistive processing unit (RPU) devices that can potentially accelerate DNN training…

Cited by 389 publications (458 citation statements)
References 40 publications
“…[3, 27–30] In the hardware implementation of FC networks, e.g., multi-layer perceptrons (MLPs), the weight matrices could be directly mapped to the conductance matrices of memristive crossbar arrays.…”
Section: CNNs and DNNs
confidence: 99%
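The weight-to-conductance mapping this statement describes is straightforward to sketch. Below is a minimal illustration, assuming a differential conductance-pair encoding and an arbitrary device range (G_MIN, G_MAX, and the scaling scheme are illustrative assumptions, not values from the cited works):

```python
import numpy as np

G_MIN, G_MAX = 1e-6, 1e-4  # assumed device conductance range, in siemens

def weights_to_conductances(W):
    """Map a real-valued weight matrix onto a pair of crossbars.

    Each weight is stored as a difference of two non-negative
    conductances, G_pos - G_neg, so negative weights are representable
    with physical devices.
    """
    scale = (G_MAX - G_MIN) / np.abs(W).max()
    G_pos = G_MIN + scale * np.clip(W, 0, None)
    G_neg = G_MIN + scale * np.clip(-W, 0, None)
    return G_pos, G_neg, scale

def crossbar_vmm(G_pos, G_neg, v_in):
    """Analog vector-matrix multiply: by Ohm's and Kirchhoff's laws the
    column currents are I_j = sum_i (G_pos - G_neg)[i, j] * v_in[i]."""
    return v_in @ (G_pos - G_neg)

W = np.random.randn(4, 3) * 0.5       # toy MLP layer weights
G_pos, G_neg, scale = weights_to_conductances(W)
x = np.random.randn(4) * 0.1          # input voltages
print(np.allclose(crossbar_vmm(G_pos, G_neg, x) / scale, x @ W))  # True
```

The differential pair is one common encoding choice; a single device per weight read against a reference column is another.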
“…For practical conductance update, mainly three approaches have been developed in the literature. [3, 30] The second approach is to use pulse trains to continuously program the target device until the conductance change reaches the desired value before programming the next one (sequential update).…”
Section: Artificial Synapses
confidence: 99%
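The sequential (program-and-verify) scheme described here reduces to a short loop. The saturating device model, pulse size, and tolerance below are illustrative assumptions rather than measured device behavior:

```python
import numpy as np

DG_PER_PULSE = 1e-7   # nominal conductance change per pulse (assumed)
G_RANGE = 1e-4        # assumed maximum conductance
TOLERANCE = 0.5e-7    # stop once within this of the target
MAX_PULSES = 200      # give up on a device after this many pulses

def apply_pulse(g, polarity):
    """Toy nonlinear device: the change per pulse shrinks near the range limits."""
    if polarity > 0:
        return g + DG_PER_PULSE * (1.0 - g / G_RANGE)
    return g - DG_PER_PULSE * (g / G_RANGE)

def sequential_update(G, dG_target):
    """Program devices one at a time, read-verifying between pulses."""
    G = G.copy()
    for idx in np.ndindex(G.shape):
        target = G[idx] + dG_target[idx]
        for _ in range(MAX_PULSES):
            err = target - G[idx]
            if abs(err) < TOLERANCE:
                break                 # verified: move on to the next device
            G[idx] = apply_pulse(G[idx], np.sign(err))
    return G

G = np.full((2, 2), 5e-5)                    # initial conductances
dG = np.array([[1e-6, -2e-6], [0.0, 3e-6]])  # desired changes
print(sequential_update(G, dG))
```

The read-verify loop is what makes this approach tolerant of the update nonlinearity and device-to-device variability noted in the next statement, at the cost of serializing the update across the array.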
“…[19–21] The major issue for this approach is the non-linear weight update and the large variability of resistive switching devices. [20, 22] On the other hand, brain-inspired spiking neural networks (SNNs) aim at replicating the brain's structure and computation in hardware. Learning usually takes place via spike-timing-dependent plasticity (STDP), [23–27] where synapses update their weight according to the timing between spikes of the pre-synaptic neuron (PRE) and the post-synaptic neuron (POST).…”
Section: -13
confidence: 99%
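The pair-based STDP rule referenced here fits in a few lines. The amplitudes and time constants below are illustrative assumptions, not values from the cited papers:

```python
import numpy as np

A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes (assumed)
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants in ms (assumed)

def stdp_dw(t_pre, t_post):
    """Weight change for one PRE/POST spike pair.

    dt > 0 (PRE fires before POST): potentiation (LTP).
    dt < 0 (POST fires before PRE): depression (LTD).
    """
    dt = t_post - t_pre
    if dt > 0:
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    return -A_MINUS * np.exp(dt / TAU_MINUS)

# Example: PRE at 10 ms, POST at 15 ms -> small potentiating update.
print(stdp_dw(10.0, 15.0))   # ~0.0078
```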
“…Each wafer allows emulating 200,000 neurons and 49 million synaptic connections [3]. As an analog cognitive neural network, there is IBM's resistive processing unit [4], among others [5].…”
Section: Introduction
confidence: 99%