Proceedings of the Twenty-Fifth International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2020
DOI: 10.1145/3373376.3378450

Accelerometer

Cited by 58 publications (4 citation statements) · References 44 publications
“…Existing benchmarks that are generally used to design processors do not accurately represent the workloads at hyperscalers. As highlighted by several previous works [3,5,20,36,35], cloud workloads exhibit fundamentally different IPC, cache miss rates, and other metrics compared to standard benchmarks. These factors make SPEC and other commercially available benchmarks a poor proxy for studying the performance of server processors in a datacenter.…”
Section: Code, Memory BW, and Latency Challenges in Cloud Datacenters (mentioning)
confidence: 97%
“…They observe similar IPC, cache, and TLB MPKIs. Accelerometer [35] studies Meta's cloud workloads, finding most cycles spent on non-core application tasks, such as compression and serialization. Some earlier works [40,7] have also looked at µ-arch improvements for cloud workloads.…”
Section: Related Work (mentioning)
confidence: 99%
“…Moreover, a large number of server-class CPUs are (and will be) used in datacenters to process High Performance Computing (HPC) workloads, and adding/maintaining additional accelerators just for DL workloads would increase complexity and cost [18]. Furthermore, offloading a modest-size task to an accelerator or a GPU might not be the best option if the offloading overhead is relatively substantial [48].…”
Section: Introduction (mentioning)
confidence: 99%
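
The last statement's point about offload overhead comes down to a simple break-even argument: offloading only pays off when the accelerator's compute savings exceed the data-transfer and kernel-launch costs. The sketch below is purely illustrative and not taken from the cited papers; the function name, the link bandwidth, and the launch-overhead figure are all assumptions.

# Illustrative sketch (not from the cited papers): a simple break-even check for
# deciding whether offloading a task to an accelerator or GPU is worthwhile.
# The bandwidth and launch-overhead figures below are assumed, not measured.

def offload_is_worthwhile(task_bytes: int,
                          cpu_time_s: float,
                          accel_time_s: float,
                          link_bw_bytes_per_s: float = 16e9,  # assumed ~PCIe 4.0 x16
                          launch_overhead_s: float = 10e-6) -> bool:
    """True only if accelerator compute plus transfer and launch overhead
    beats running the task on the CPU."""
    transfer_s = task_bytes / link_bw_bytes_per_s
    return accel_time_s + transfer_s + launch_overhead_s < cpu_time_s

# A modest-size task: 64 KiB payload, 15 us on the CPU, 2 us on the accelerator.
# The accelerator is 7.5x faster at the compute itself, yet ~14 us of transfer
# and launch overhead makes offloading a net loss here.
print(offload_is_worthwhile(64 * 1024, cpu_time_s=15e-6, accel_time_s=2e-6))  # False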