2020
DOI: 10.1007/978-3-030-18338-7_19
The Memory Challenge in Ultra-Low Power Deep Learning

Cited by 4 publications (2 citation statements)
References 34 publications

“…The deployment of DL-based algorithms on the IoT demands aggressive hardware, software, and algorithmic co-optimization to exploit the scarce resources on these systems to the maximum degree [6]. In particular, the scarce availability of memory constitutes a real Deep Learning Memory Wall [7]: a fundamental limitation to the maximum performance of an embedded DNN compute system.…”
Section: Introduction · Citation type: mentioning · Confidence: 99%
“…There are many reasons for moving computing closer to where sensors gather data, including reliability, limiting network bandwidth, providing improved security, or more effective resource management of deep learning applications. This challenge is gaining research traction, for example, in [4], authors discuss the deep learning memory challenge caused by scarce availability of memory. They propose a combination of techniques for deployment of next-generation of on-chip machine learning including hardware-aware DNNs that could allow for this stage to be on-chip as opposed to offline.…”
Section: Introduction · Citation type: mentioning · Confidence: 99%
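The memory-wall point made in these citation statements can be illustrated with a quick back-of-the-envelope estimate. The Python sketch below is purely illustrative: the layer shapes and the 128 KiB on-chip SRAM budget are assumptions, not figures from the cited paper. It compares the rough weight-plus-activation footprint of a small CNN in float32 against an int8-quantized version.

```python
# Purely illustrative sketch of the "Deep Learning Memory Wall" idea:
# even a small CNN can exceed the on-chip SRAM of a typical low-power MCU
# unless weights and activations are compressed.
# All layer shapes and the SRAM budget are assumptions for illustration,
# not values taken from the cited paper.

# Hypothetical conv layers: (out_ch, in_ch, kernel_h, kernel_w, out_h, out_w)
LAYERS = [
    (16, 3, 3, 3, 32, 32),
    (32, 16, 3, 3, 16, 16),
    (64, 32, 3, 3, 8, 8),
]

SRAM_BUDGET = 128 * 1024  # assumed on-chip SRAM budget, in bytes


def footprint_bytes(bytes_per_value: int) -> int:
    """Weights of all layers plus the largest single activation map
    (a rough proxy for peak activation memory with layer-by-layer buffer reuse)."""
    weights = sum(oc * ic * kh * kw for oc, ic, kh, kw, _, _ in LAYERS)
    peak_act = max(oc * oh * ow for oc, _, _, _, oh, ow in LAYERS)
    return (weights + peak_act) * bytes_per_value


for name, bpv in (("float32", 4), ("int8", 1)):
    total = footprint_bytes(bpv)
    verdict = "fits within" if total <= SRAM_BUDGET else "exceeds"
    print(f"{name}: {total / 1024:.1f} KiB -> {verdict} a {SRAM_BUDGET // 1024} KiB SRAM budget")
```

Under these assumed numbers, the float32 model already overflows the SRAM budget while the int8 version fits, which is the kind of gap that motivates the hardware, software, and algorithmic co-optimization the citing works describe.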