2018
DOI: 10.20944/preprints201807.0362.v2
Preprint
Towards Neuromorphic Learning Machines using Emerging Memory Devices with Brain-like Energy Efficiency

Abstract: The ongoing revolution in Deep Learning is redefining the nature of computing, driven by an increasing volume of pattern-classification and cognitive tasks. Specialized digital hardware for deep learning still predominates, owing to the flexibility of software implementations and the maturity of the algorithms. However, it is increasingly desired that cognitive computing occur at the edge, i.e. on energy-constrained hand-held devices, which is energy-prohibitive when employing…

Cited by 2 publications (2 citation statements)
References 43 publications
“…More detailed studies of the energy consumption of the neural system in the brain have been put forward; however, the estimates end up in a similar range [56]. Exactly how the brain spends this energy is a matter of debate, but it has been estimated that around 70% is used for interneuron communication [57]. Using CMOS solutions particularly optimized toward neural networks, efficiencies in the range of 10⁻¹¹ J per operation have been achieved.…”
Section: Discussion (mentioning)
confidence: 99%
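The energy gap in the quoted statement can be made concrete with a back-of-envelope calculation. The ~20 W brain power budget and ~10¹⁵ synaptic events per second used below are commonly cited order-of-magnitude estimates, not figures from the cited text; only the 10⁻¹¹ J/op CMOS number comes from the quote.

```python
# Illustrative energy-per-operation comparison (assumed figures noted below).
brain_power_w = 20.0        # commonly cited whole-brain power budget (assumption)
synaptic_ops_per_s = 1e15   # rough estimate of synaptic events per second (assumption)

j_per_op_brain = brain_power_w / synaptic_ops_per_s  # ~2e-14 J per synaptic event
j_per_op_cmos = 1e-11       # optimized CMOS neural-network hardware (per the quote)

gap = j_per_op_cmos / j_per_op_brain
print(f"brain: ~{j_per_op_brain:.0e} J/op")
print(f"CMOS:  ~{j_per_op_cmos:.0e} J/op")
print(f"gap:   ~{gap:.0f}x")
```

Under these assumptions the brain lands around 2×10⁻¹⁴ J per synaptic event, a few hundred times below the quoted CMOS figure, which is the efficiency gap that motivates the emerging-memory devices discussed in the preprint.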
“…So an in-memory computing system like a crossbar array of analog synaptic devices has been considered as a faster and energy-efficient alternative to traditional computing systems with memory–computing separation for executing these algorithms [2]–[7]. While such a crossbar array can be used for inference (forward computation with a pre-trained NN), recently the option of training the NN in the array itself has also been explored — also known as on-chip learning — owing to the advantages it offers for edge devices [4], [8], [9].…”
Section: Introduction (mentioning)
confidence: 99%