Proceedings of the 52nd Annual IEEE/ACM International Symposium on Microarchitecture 2019
DOI: 10.1145/3352460.3358282
Dynamic Multi-Resolution Data Storage

Abstract: Approximate computing, which operates on less precise data, yields significant performance gains and energy-cost reductions for compute kernels. However, without a full-stack design spanning the computer system, modern architectures undermine the potential of approximate computing. In this paper, we present Varifocal Storage, a dynamic multi-resolution storage system that tackles challenges in performance, quality, flexibility, and cost for computer systems supporting diverse application demands. Varifo…
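The core idea of multi-resolution storage — trading numeric precision for reduced storage and bandwidth — can be illustrated with a minimal sketch. This is not the paper's actual mechanism (Varifocal Storage adjusts resolution inside the storage system itself); it is only an assumed software analogy in which low-order mantissa bits of IEEE 754 doubles are truncated, and the `keep_bits` knob is hypothetical:

```python
import struct

def reduce_resolution(value: float, keep_bits: int) -> float:
    """Return a lower-resolution copy of `value` by zeroing the
    low-order mantissa bits of its 64-bit IEEE 754 representation.

    Illustrative sketch only; `keep_bits` is a hypothetical knob,
    not an interface from the Varifocal Storage paper.
    """
    # Reinterpret the float as its 64-bit integer bit pattern.
    (bits,) = struct.unpack("<Q", struct.pack("<d", value))
    # A double has a 52-bit mantissa; drop the least-significant bits.
    drop = 52 - keep_bits
    mask = ~((1 << drop) - 1) & 0xFFFFFFFFFFFFFFFF
    (approx,) = struct.unpack("<d", struct.pack("<Q", bits & mask))
    return approx

x = 3.141592653589793
coarse = reduce_resolution(x, 8)   # keeps only 8 mantissa bits
```

A kernel tolerant of approximation could read the coarse copy (fewer significant bits, so it compresses and transfers more cheaply), while precision-sensitive code reads the full-resolution original.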

Cited by 8 publications (1 citation statement) · References 76 publications
“…The focus of our work is on accelerating large-scale GNN training with an ISP architecture. There is a large body of prior literature exploring in-storage/near-data processing [1], [5], [10], [12], [13], [16], [25], [27], [31]- [33], [35], [38], [38], [39], [42], [49], [51], [54], [58], [65], [67], [72]- [74], [76], [81], [81]- [83] or in-memory processing [2]- [4], [9], [18], [20], [23], [29], [34], [36], [37], [44], [45], [50], [60], [68], [79] architectures for data-intensive workloads as well as ASIC/FPGA/GPU based acceleration for graph neural networks [24], [40], [52], [54]-…”
Section: Related Work
confidence: 99%