Proceedings of the 42nd Annual International Symposium on Computer Architecture 2015
DOI: 10.1145/2749469.2750392

Profiling a warehouse-scale computer

Abstract: With the increasing prevalence of warehouse-scale (WSC) and cloud computing, understanding the interactions of server applications with the underlying microarchitecture becomes ever more important in order to extract maximum performance out of server hardware. To aid such understanding, this paper presents a detailed microarchitectural analysis of live datacenter jobs, measured on more than 20,000 Google machines over a three year period, and comprising thousands of different applications. We first find that WS…

Cited by 303 publications (112 citation statements). References 42 publications.
“…A very slight correlation was also found between the Fourier parameters and the O-C values, suggesting that the O-C variations (up to 30 min) might be due to instabilities in the light curve shape. This correlation was strengthened by the analysis of the Kepler measurements up to Q17 [4].…”
Section: Cepheids (mentioning)
confidence: 91%
“…Solutions include instrumenting the stack prologue and epilogue to keep track of the current stack through a series of bits stored in a register [12,13,29]. However, overheads of this approach are ≈6% and higher, exceeding all the time spent in memory allocation [31]. We solve these problems by using stack height and object size for per-site prediction and cache lookups.…”
Section: Lifetime Prediction Challenges (mentioning)
confidence: 99%
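
The quoted approach of keying predictions on stack height and object size can be sketched in a few lines. The Python below is a minimal, hypothetical illustration of such a per-site lifetime cache; the class name, method names, and averaging policy are assumptions for illustration and do not come from the cited works.

# Hypothetical sketch only: a per-site lifetime-prediction cache keyed on
# (allocation site, stack height, object size class), mirroring the idea the
# quoted passage describes. Names and the averaging policy are assumptions.
from collections import defaultdict


class LifetimeCache:
    def __init__(self):
        # key -> [total observed lifetime in ms, number of observations]
        self._stats = defaultdict(lambda: [0.0, 0])

    @staticmethod
    def _size_class(nbytes):
        # Bucket object sizes into power-of-two classes to keep the key space small.
        return max(1, nbytes).bit_length()

    def observe(self, site_id, stack_height, nbytes, lifetime_ms):
        # Record one observed object lifetime for this (site, stack height, size) key.
        entry = self._stats[(site_id, stack_height, self._size_class(nbytes))]
        entry[0] += lifetime_ms
        entry[1] += 1

    def predict(self, site_id, stack_height, nbytes, default_ms=1.0):
        # Return the mean observed lifetime for this key, or a default if unseen.
        entry = self._stats.get((site_id, stack_height, self._size_class(nbytes)))
        return entry[0] / entry[1] if entry and entry[1] else default_ms


# Usage: feed observations during a profiling run, query at allocation time.
cache = LifetimeCache()
cache.observe(site_id=42, stack_height=7, nbytes=256, lifetime_ms=3.5)
print(cache.predict(site_id=42, stack_height=7, nbytes=300))  # same size class -> 3.5

The point of the key choice is that stack height and size class can be read cheaply at allocation time, avoiding the instrumented-prologue bookkeeping whose overhead the passage criticizes.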
“…Overheads. Continuous profiling in deployment is not practical because it adds 6% overhead [13,42], which can be more than memory allocation itself [31].…”
Section: Introduction (mentioning)
confidence: 99%
“…As memory latency is a critical performance determinant for datacenter workloads [25,39], the dramatic increase in AMAT caused by replacing DRAM with SCM will directly manifest itself in end-to-end performance degradation. Therefore, we begin by asking the question: by how much will performance degrade from simply replacing the memory?…”
Section: Workload Compatibility With SCM (mentioning)
confidence: 99%
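
To make the AMAT argument concrete, here is a back-of-the-envelope sketch in Python. Every latency and miss-rate value is an illustrative assumption rather than a measurement from the cited papers; the point is only that inflating main-memory latency feeds almost directly into average memory access time once LLC misses are non-trivial.

# Back-of-the-envelope AMAT comparison. Every number below is an illustrative
# assumption, not a measurement from the cited papers.
def amat(l1_hit_ns, l1_miss_rate, llc_hit_ns, llc_miss_rate, mem_ns):
    # Average memory access time for a two-level cache backed by main memory.
    return l1_hit_ns + l1_miss_rate * (llc_hit_ns + llc_miss_rate * mem_ns)


DRAM_NS = 100.0          # assumed DRAM access latency
SCM_NS = 10 * DRAM_NS    # assume SCM lands ~10x above DRAM (quoted range: 4-100x)

baseline = amat(1.0, 0.10, 30.0, 0.30, DRAM_NS)
with_scm = amat(1.0, 0.10, 30.0, 0.30, SCM_NS)
print(f"AMAT with DRAM: {baseline:.1f} ns, with SCM: {with_scm:.1f} ns "
      f"({with_scm / baseline:.1f}x worse)")

Even with a modest 10x multiplier, AMAT roughly quintuples in this toy configuration, which is the effect the passage argues surfaces as end-to-end performance degradation.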
“…Emerging storage-class memory (SCM) technologies are a prime candidate to serve as the next generation of main memory, as they boast approximately an order of magnitude greater density than DRAM at a lower cost per bit [31,64,78,90]. These traits come at the price of elevated access latency compared to DRAM, creating new challenges for systems designers as memory latency is a critical factor in datacenter application performance [39]. Given that typical SCM latencies are 4-100× greater than DRAM [69,77], and that SCM devices often have write latencies 2-10× longer than reads, naïvely and completely replacing DRAM with SCM is an unacceptable compromise for datacenter operators.…”
Section: Introduction (mentioning)
confidence: 99%
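
The read/write asymmetry mentioned above can be quantified with a similarly rough sketch. The multipliers below are assumptions picked from inside the ranges the passage quotes (reads 4-100x DRAM, writes 2-10x longer than reads) and are not values from the cited paper.

# Rough sketch of the read/write asymmetry. Multipliers are assumptions chosen
# from inside the ranges quoted above, not values from the cited paper.
def effective_latency_ns(read_frac, read_ns, write_ns):
    # Traffic-weighted average device access latency.
    return read_frac * read_ns + (1.0 - read_frac) * write_ns


DRAM_NS = 100.0
SCM_READ_NS = 5 * DRAM_NS        # assumed read multiplier within the 4-100x range
SCM_WRITE_NS = 3 * SCM_READ_NS   # assumed write penalty within the 2-10x range

for read_frac in (0.9, 0.7, 0.5):
    eff = effective_latency_ns(read_frac, SCM_READ_NS, SCM_WRITE_NS)
    print(f"{int(read_frac * 100)}% reads: SCM {eff:.0f} ns vs DRAM {DRAM_NS:.0f} ns")

Write-heavy traffic pushes the effective latency well past the read-only multiplier, which is why the passage treats a wholesale DRAM-to-SCM swap as unacceptable for datacenter operators.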