Proceedings of the Fourteenth EuroSys Conference, 2019
DOI: 10.1145/3302424.3303988
Runtime Object Lifetime Profiler for Latency Sensitive Big Data Applications

Cited by 27 publications (13 citation statements). References 34 publications.
“…Prior work predicts object lifetime as long or short based on allocation site and precisely matching calling context [11,16] (although Cohn and Singh did use stack data for predictions instead [17]). Current approaches typically store a table of allocation sites, together with a summary of observed per-site lifetimes [13]. They either 1) collect lifetime information at runtime, i.e., dynamic pretenuring [16,30] or 2) use profile-guided optimization (PGO), collecting lifetimes offline with special instrumentation, analyzing them offline, and then using them in deployment [11].…”
Section: Lifetime Prediction Challenges
confidence: 99%
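The per-site lifetime table described in this snippet can be illustrated with a minimal sketch. All names, the lifetime unit, and the mean-based threshold policy below are illustrative assumptions, not the actual design of any of the cited systems:

```python
from collections import defaultdict

# Assumed unit: object lifetime measured in bytes allocated since birth.
# The threshold value is an arbitrary placeholder for illustration.
LONG_LIVED_THRESHOLD = 1000


class SiteSummary:
    """Running summary of observed lifetimes for one allocation site."""

    def __init__(self):
        self.count = 0
        self.total_lifetime = 0

    def record(self, lifetime):
        self.count += 1
        self.total_lifetime += lifetime

    def predict_long_lived(self):
        # No observations yet: default to the nursery (short-lived).
        if self.count == 0:
            return False
        return self.total_lifetime / self.count >= LONG_LIVED_THRESHOLD


class LifetimeTable:
    """Table of allocation sites, as in dynamic pretenuring schemes."""

    def __init__(self):
        self.sites = defaultdict(SiteSummary)

    def record_death(self, site_id, lifetime):
        # Called when the collector observes an object's death.
        self.sites[site_id].record(lifetime)

    def should_pretenure(self, site_id):
        # Consulted on the allocation path to pick nursery vs. old space.
        return self.sites[site_id].predict_long_lived()
```

A runtime scheme updates this table online as objects die; a PGO scheme would instead populate it from an offline profile and consult it read-only in deployment.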
“…Table 1 shows that recording the calling stack for an allocation can take an order of magnitude longer than the allocation itself, which is problematic. Solutions include instrumenting the stack prologue and epilogue to keep track of the current stack through a series of bits stored in a register [12,13,29]. However, overheads of this approach are ≈6% and higher, exceeding all the time spent in memory allocation [31].…”
Section: Lifetime Prediction Challenges
confidence: 99%
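The prologue/epilogue technique mentioned here can be sketched in a few lines. In this sketch a thread-local word stands in for the hardware register, and the bit width and the shift-based combining scheme are illustrative assumptions, not the cited systems' actual encodings:

```c
#include <stdint.h>

/* Each instrumented frame contributes this many identifying bits
 * (an assumed width; real encodings differ). */
#define CTX_BITS_PER_FRAME 8

/* Thread-local word standing in for the dedicated register. */
static _Thread_local uint64_t ctx_register = 0;

/* Inserted at function entry (prologue): shift this function's
 * identifier bits into the context word. */
static inline void ctx_enter(uint8_t func_id) {
    ctx_register = (ctx_register << CTX_BITS_PER_FRAME) | func_id;
}

/* Inserted at function exit (epilogue): shift those bits back out,
 * restoring the caller's context. */
static inline void ctx_leave(void) {
    ctx_register >>= CTX_BITS_PER_FRAME;
}

/* The allocator reads the word to tag an allocation with its calling
 * context without walking the stack. */
static inline uint64_t ctx_current(void) {
    return ctx_register;
}
```

The appeal is that reading one register is far cheaper than walking the stack at every allocation; the cost, as the snippet notes, is that every instrumented call and return pays for the prologue/epilogue updates, which is where the ≈6% overhead comes from.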
“…However, none of those studies provides a characterization of such a wide range of managed applications (from both the DaCapo and Renaissance benchmark suites) on top of NUMA machines. Finally, several profiling infrastructures [6,21,28,30] are available for managed runtimes. However, they are either task-specific (ROLP and FJProfiler) or not built for NUMA systems (JProfiler, AntTracks), and none of them supports hardware-counter utilization.…”
Section: Introduction
confidence: 99%
“…In this paper, rather than reducing an algorithm's complexity (e.g., by decreasing the number of similarity computations), we propose to pursue an orthogonal strategy that is motivated by the system bottlenecks induced by large data volumes: large amounts of data not only stress complex algorithms, they also choke the underlying computation pipelines these algorithms execute on [12].…”
Section: Introduction
confidence: 99%