2014
DOI: 10.1145/2644808

Approximate Storage in Solid-State Memories

Abstract: Memories today expose an all-or-nothing correctness model that incurs significant costs in performance, energy, area, and design complexity. But not all applications need high-precision storage for all of their data structures all of the time. This paper proposes mechanisms that enable applications to store data approximately and shows that doing so can improve the performance, lifetime, or density of solid-state memories. We propose two mechanisms. The first allows errors in multi-level cells by reducing the …
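The first mechanism (truncated above) trades write precision for cost in multi-level cells. A minimal sketch of the general idea — my own illustration under assumed parameters, not the paper's actual algorithm — modeling a program-and-verify write where each pulse moves the cell's analog value halfway to the target:

```python
def write_cell(target, max_pulses):
    """Program-and-verify write to a 4-level cell: each pulse moves the
    analog value halfway to the target; a verify step stops early once
    the value is inside the guard band. Capping max_pulses (an
    approximate write) can leave the cell one level off."""
    value = 0.0
    for pulse in range(1, max_pulses + 1):
        value += (target - value) / 2       # programming pulse
        if abs(value - target) < 0.25:      # verify: inside guard band
            return round(value), pulse
    return round(value), max_pulses         # gave up early: may be wrong

print(write_cell(3, 8))  # precise write: (3, 4) - correct level, 4 pulses
print(write_cell(3, 2))  # approximate write: (2, 2) - off by one level
```

The approximate write stores a neighboring level, which is exactly the kind of small, bounded error the paper argues many applications can absorb.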

Cited by 127 publications (94 citation statements)
References 46 publications
“…In one proposal, Sampson et al [63] consider multilevel memories, such as multilevel phase-change memories [64], where density is enhanced by storing more than one bit in a single memory cell. Multiple writes may be needed to ensure that the correct value is stored in the cell, and hence a write-latency cost and an energy cost are incurred in a multilevel write.…”
Section: Approximate Memories
confidence: 99%
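The latency and energy cost mentioned above scale with the number of program-and-verify iterations. A toy cost model — my own illustration, assuming each pulse halves the remaining analog error — shows how widening the tolerated guard band cuts the pulse count:

```python
def pulses_needed(target, guard_band):
    """Count program-and-verify iterations until the cell's analog value
    settles within guard_band of the target; each pulse halves the gap."""
    value, pulses = 0.0, 0
    while abs(value - target) >= guard_band:
        value += (target - value) / 2
        pulses += 1
    return pulses

# Tolerating more analog error (a wider guard band) means fewer pulses,
# i.e. lower write latency and energy per multilevel write.
print(pulses_needed(3.0, 0.05))  # precise: 6 pulses
print(pulses_needed(3.0, 0.40))  # approximate: 3 pulses
```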
“…Along with our colleagues Adrian Sampson, Jacob Nelson, and Luis Ceze, we observed that it is possible to increase memory density if the application can tolerate small errors [24]. Certain applications such as sensor data processing, machine learning, and image processing have error-tolerant data structures; these applications can produce acceptable output even if some bits of error-tolerant data structures are incorrect (Scenario 5 in Figure 1). Some of these applications can have large capacity needs, so there is an incentive to increase density at the cost of errors.…”
Section: Static and Dynamic Selection of Memory Operating Points
confidence: 99%
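The error tolerance described above can be made concrete: if errors are confined to the low-order bits of a value, the numeric damage per value is bounded. A hypothetical sketch (the helper `flip_low_bits` is my own, not from the cited work):

```python
def flip_low_bits(pixels, n_bits=2, pattern=0b11):
    """XOR an error pattern into the n_bits least-significant bits of
    each 8-bit pixel, mimicking bit errors confined to approximately
    stored bits. High-order bits are untouched, so the worst-case
    per-pixel error is 2**n_bits - 1."""
    mask = (1 << n_bits) - 1
    return [(p & ~mask) | ((p ^ pattern) & mask) for p in pixels]

pixels = [0, 17, 128, 255]
noisy = flip_low_bits(pixels)
print(noisy)                                           # [3, 18, 131, 252]
print(max(abs(a - b) for a, b in zip(pixels, noisy)))  # 3
```

Even with every protected-region bit flipped, no pixel moves by more than 3 out of 255 — the kind of bounded degradation an image-processing pipeline can typically absorb.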
“…It includes arithmetic circuit design at the transistor and logic levels [3], approximate memory and storage [4] (including SRAM, DRAM and non-volatile memories), and various approximate processor architectures [5] (including neural networks, general-purpose and reconfigurable processors such as instruction set architectures (ISAs), graphics processing units (GPUs) and FPGAs). Applications of AC have included image and signal processing, classification and recognition, and machine learning, among others.…”
Section: Introduction
confidence: 99%