2018 IEEE International Solid-State Circuits Conference (ISSCC)
DOI: 10.1109/isscc.2018.8310398
A 42pJ/decision 3.12TOPS/W robust in-memory machine learning classifier with on-chip training

Cited by 136 publications (58 citation statements); References 3 publications.
“…In another example, the TrueNorth platform (Cassidy et al, 2013) is built upon multiple 256 × 256 1-bit synaptic weight crossbars, although it includes extra circuitry to allow assigning up to four possible 8-bit values to the synapses (with some restrictions). In the world of non-spiking Deep Neural Networks (DNN), where there is now a strong quest for providing dedicated efficient hardware (Chen et al, 2016; Sim et al, 2016; Bong et al, 2017; Whatmough et al, 2017; Biswas and Chandrakasan, 2018; Gonugondla et al, 2018; Khwa et al, 2018), some theorists are studying ways to reduce bit precision of the weights down to 1-bit (Courbariaux et al, 2015; Rastegari et al, 2016) to help simplify hardware. Here we focus on spiking neural network (SNN) hardware capable of on-line unsupervised learning through Spike-Time-Dependent-Plasticity (STDP).…”
Section: Introduction (mentioning, confidence: 99%)
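The 1-bit weight idea cited in this excerpt (Courbariaux et al, 2015; Rastegari et al, 2016) amounts to binarizing real-valued weights with a sign function while keeping a full-precision copy for the gradient update. A minimal NumPy sketch of that scheme, with illustrative layer sizes and a hypothetical binarize helper (not code from any of the cited chips):

```python
import numpy as np

def binarize(w):
    # BinaryConnect-style sign binarization: +1/-1 weights,
    # while the real-valued "shadow" weights are kept for updates.
    return np.where(w >= 0, 1.0, -1.0)

rng = np.random.default_rng(0)
w_real = rng.normal(scale=0.1, size=(784, 10))   # shadow weights (float), illustrative shape
x = rng.normal(size=(1, 784))                    # one input vector

w_bin = binarize(w_real)                         # forward pass uses 1-bit weights
logits = x @ w_bin

# Straight-through-style update: the gradient computed with the binary
# weights is applied to the real-valued weights, clipped to [-1, 1].
grad = rng.normal(size=w_real.shape)             # placeholder upstream gradient
w_real = np.clip(w_real - 0.01 * grad, -1.0, 1.0)
```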
“…A method called WAGE has been developed to train DNNs with low bitwidth integers at all stages, including gradients and backpropagated errors (Wu et al, 2018 ). Very recently, training methods were also ported onto specific hardware systems: Gonugondla et al ( 2018 ) present a deep in-memory architecture with on-chip training, which is primarily useful to compensate for PVT variations of the analog circuits.…”
Section: Discussion (mentioning, confidence: 99%)
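WAGE-style training keeps weights, activations, gradients, and backpropagated errors as low-bitwidth integers. The sketch below shows one plausible uniform quantizer of the kind such schemes rely on; the quantize function and bit-widths here are assumptions for illustration, not the exact mapping of Wu et al (2018):

```python
import numpy as np

def quantize(x, k):
    # Uniform k-bit quantization: snap to a grid with step 2^(1-k)
    # and clip to the representable range. Illustrative only.
    step = 2.0 ** (1 - k)
    return np.clip(np.round(x / step) * step, -1.0, 1.0 - step)

w = np.random.default_rng(1).normal(scale=0.2, size=(4, 4))
print(quantize(w, k=2))   # weights held at very low precision
print(quantize(w, k=8))   # gradients/errors typically get more bits
```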
“…6) we can observe that it has a significant spread from its mean value (σ ≈ 30%µ). Now, when I_cell is used to modulate the analog voltage (V_a) on the bit-line [13]-[15], [17], there is a wide variation in the V_a value and it cannot be controlled very well. This compromises the computation accuracy, and extra algorithmic techniques might be required to compensate for that.…”
Section: Overall Architecture (mentioning, confidence: 99%)
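The effect described in this excerpt can be illustrated with a quick Monte Carlo estimate: if each cell's current has a spread of about 30% of its mean, the accumulated bit-line quantity inherits a variance that grows with the number of active cells. The model below is a deliberate simplification (independent Gaussian currents, ideal summation), not the circuit of [13]-[15], [17]:

```python
import numpy as np

rng = np.random.default_rng(2)
n_cells, n_trials = 64, 10_000

# 1-bit weights stored in the cells; half of them active for a non-zero ideal sum.
w = rng.permutation(np.repeat([0, 1], n_cells // 2))

i_nom = 1.0                    # nominal cell current (arbitrary units)
sigma = 0.30 * i_nom           # spread quoted above: sigma ~ 30% of the mean

ideal = w.sum() * i_nom                              # mismatch-free accumulation
i_cell = rng.normal(i_nom, sigma, size=(n_trials, n_cells))
actual = (i_cell * w).sum(axis=1)                    # accumulation with cell mismatch

print("relative error: mean %.3f, std %.3f"
      % (np.mean(actual / ideal - 1), np.std(actual / ideal)))
```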
“…However, this would lead to an increase in the number of computations and the energy required. [15] proposed on-chip training to compensate for chip-to-chip variations. However, this would incur the energy and timing penalty of re-training the network for every single chip.…”
Section: Overall Architecture (mentioning, confidence: 99%)
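One way to reason about that penalty is to amortize a one-time per-chip retraining energy over the decisions the chip makes in deployment. The 42 pJ/decision figure comes from the paper title; the retraining energy and decision count below are placeholders, not measured values:

```python
# Back-of-the-envelope amortization of per-chip retraining cost.
# e_retrain_uj and decisions are hypothetical placeholders, not data from [15].
e_inference_pj = 42.0          # energy per decision, from the paper title
e_retrain_uj = 50.0            # assumed one-time on-chip training energy
decisions = 100_000_000        # assumed decisions over the chip's deployment

overhead = (e_retrain_uj * 1e6) / (decisions * e_inference_pj)
print(f"retraining adds {overhead:.2%} energy overhead over {decisions:,} decisions")
```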