2020 Design, Automation & Test in Europe Conference & Exhibition (DATE)
DOI: 10.23919/date48585.2020.9116263
DeepNVM: A Framework for Modeling and Analysis of Non-Volatile Memory Technologies for Deep Learning Applications

Cited by 15 publications (5 citation statements) | References 8 publications
“…A comprehensive framework (CF) describing how an SRAM cache's availability and reliability degrade over time under different operating conditions, configurations, supply voltage scaling, and aging processes is presented in [26]. A non-volatile memory technology framework for last-level caches using SRAM, STT-MRAM, and SOT-MRAM technologies is reported in [27].…”
Section: Background Study
confidence: 99%
“…During inference, after the image passes through the layers of the DL model, the activations are checked against the saved activations of a specific layer. If the activations of that layer for the current query match the cached activations [167,168,169,187,188,189,190,191,192,193], further propagation of the activations is stopped and the cached result is returned as the prediction. This approach was applied to a VGG-16 architecture on CIFAR and yielded a 1.96× latency gain on a CPU and a 1.54× gain on a GPU with no loss in accuracy.…”
Section: B) Model Selection
confidence: 99%
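
The activation-caching scheme described in the excerpt above can be illustrated with a short sketch. The Python code below shows the general early-exit idea: run the front of the network, probe a cache of intermediate activations, and skip the remaining layers on a hit. The class and function names, the distance-based matching rule, and the threshold are illustrative assumptions, not the cited paper's implementation.

    # Minimal sketch of activation caching for early-exit inference.
    # Names, matching rule, and threshold are illustrative assumptions.
    import numpy as np

    class ActivationCache:
        def __init__(self, threshold=1e-3):
            self.keys = []        # cached intermediate activations
            self.values = []      # predictions stored with each activation
            self.threshold = threshold

        def lookup(self, activation):
            # Return the prediction of the first cached activation within
            # the distance threshold, or None on a miss.
            for key, value in zip(self.keys, self.values):
                if np.linalg.norm(activation - key) < self.threshold:
                    return value
            return None

        def insert(self, activation, prediction):
            self.keys.append(activation)
            self.values.append(prediction)

    def cached_inference(early_layers, late_layers, cache, x):
        # Run the early layers, probe the cache, and only run the
        # expensive late layers on a cache miss.
        activation = early_layers(x)
        hit = cache.lookup(activation)
        if hit is not None:
            return hit            # early exit: reuse the cached prediction
        prediction = late_layers(activation)
        cache.insert(activation, prediction)
        return prediction

In this sketch, `early_layers` and `late_layers` stand for the two halves of the network split at the chosen layer; the latency gain comes from avoiding `late_layers` whenever the cache lookup succeeds.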
“…Neural network workloads continue to have large memory footprints and significant computational requirements to attain improved accuracy, which worsens the memory bottleneck and lowers the overall performance of the systems on which deep learning models run (Inci 2022; Inci et al, 2022a). As deep learning models have become more proficient at many tasks, developing neural network models that are smaller in terms of trainable parameters has attracted interest from many researchers in the field (Inci et al, 2022b), including to support operational needs in flood forecasting (Krajewski et al, 2021; Xiang and Demir, 2022) and inundation mapping (Hu and Demir, 2021; Li et al, 2022).…”
Section: Introduction
confidence: 99%