2022
DOI: 10.1145/3520130

Energy-efficient and Reliable Inference in Nonvolatile Memory under Extreme Operating Conditions

Abstract: Beyond-edge devices can operate outside the reach of the power grid and without batteries. Such devices can be deployed in large numbers in regions that are difficult to access. Using machine learning, these devices can solve complex problems and relay valuable information back to a host. Deployed in low Earth orbit, such devices can even serve as nano-satellites. Due to the harsh and unpredictable nature of the environment, these devices must be highly energy efficient, be capable of operating intermittently…

Cited by 7 publications (1 citation statement)
References 138 publications (176 reference statements)
“…As the main overhead of intermittent DNN inference is checkpointing program state to NVM, specialized processors have been developed to reduce the checkpointing overhead, or even remove the need for checkpointing, and to improve the energy efficiency of DNN inference. MOUSE [92], [93] is an in-memory accelerator specifically designed to enable DNN inference in the presence of power failures; its architecture is shown in Figure 10. First, MOUSE has on-chip non-volatile memory, specifically an STT-MRAM array, which is considered a universal memory replacement [99], so that data is not lost across power failures.…”
Section: G. Hardware Design to Facilitate Intermittent DNN Inference
confidence: 99%
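The quoted statement summarizes the core mechanism: committing inference progress to NVM so that a power failure costs only the current layer, not the whole run. As a rough illustration, here is a minimal C sketch of layer-granularity checkpointing to a memory-mapped NVM region. All names in it (nvm_ckpt_t, layer_forward, the ".nvm" section) are hypothetical and not taken from the MOUSE design, and a real implementation would additionally need cache flushes and write fences to guarantee commit ordering on actual hardware.

```c
/*
 * Minimal sketch of layer-granularity checkpointing for intermittent
 * inference. All names here (nvm_ckpt_t, layer_forward, the ".nvm"
 * section) are hypothetical and NOT taken from the MOUSE design.
 */
#include <stdint.h>
#include <string.h>

#define NUM_LAYERS 8
#define ACT_SIZE   256
#define CKPT_MAGIC 0xC0FFEEu   /* marks a valid, committed checkpoint */

/* Checkpoint record; assume the linker places it in NVM (e.g. MRAM). */
typedef struct {
    uint32_t magic;          /* valid-checkpoint marker, written last */
    uint32_t next_layer;     /* first layer not yet completed */
    float    act[ACT_SIZE];  /* activations produced so far */
} nvm_ckpt_t;

static nvm_ckpt_t nvm_ckpt __attribute__((section(".nvm")));

/* Hypothetical per-layer kernel, stubbed so the sketch is self-contained. */
static void layer_forward(uint32_t layer, float *act)
{
    for (int i = 0; i < ACT_SIZE; i++)
        act[i] = 0.5f * act[i] + (float)layer;   /* placeholder compute */
}

void run_inference(void)
{
    uint32_t start = 0;
    float act[ACT_SIZE] = {0};

    /* After a power failure, resume from the last committed checkpoint. */
    if (nvm_ckpt.magic == CKPT_MAGIC) {
        start = nvm_ckpt.next_layer;
        memcpy(act, nvm_ckpt.act, sizeof act);
    }

    for (uint32_t l = start; l < NUM_LAYERS; l++) {
        layer_forward(l, act);               /* compute one layer */

        /* Commit progress: write the data first, publish the marker
         * last, so a failure mid-commit leaves the previous checkpoint
         * intact rather than a torn one. */
        memcpy(nvm_ckpt.act, act, sizeof act);
        nvm_ckpt.next_layer = l + 1;
        nvm_ckpt.magic = CKPT_MAGIC;
    }
}
```

On recovery the loop simply restarts at next_layer, so under these assumptions a power failure costs at most the work of one layer plus one NVM commit; in-memory designs like MOUSE aim to shrink or eliminate that commit cost entirely.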