Proceedings of the 2013 ACM SIGPLAN International Conference on Object-Oriented Programming, Systems, Languages & Applications (OOPSLA 2013)
DOI: 10.1145/2509136.2509510
Efficient context sensitivity for dynamic analyses via calling context uptrees and customized memory management

Abstract: State-of-the-art dynamic bug detectors such as data race and memory leak detectors report program locations that are likely causes of bugs. However, programmers need more than static program locations to understand the behavior of increasingly complex and concurrent software. Dynamic calling context provides additional information, but it is expensive to record calling context frequently, e.g., at every read and write. Context-sensitive dynamic analyses can build and maintain a calling context tree (CCT) to tr…

Cited by 12 publications (7 citation statements)
References 60 publications
“…Table 1 shows that recording the calling stack for an allocation can take an order of magnitude longer than the allocation itself, which is problematic. Solutions include instrumenting the stack prologue and epilogue to keep track of the current stack through a series of bits stored in a register [12,13,29]. However, the overheads of this approach are ≈6% and higher, exceeding all the time spent in memory allocation [31].…”
Section: Lifetime Prediction Challenges
confidence: 99%
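The register-based encoding this excerpt describes can be illustrated with a small simulation. This is a sketch, not the implementation from [12,13,29]: the per-site bit width, class, and method names below are assumptions for illustration. Each function prologue shifts a call-site identifier into a context word, and each epilogue shifts it back out, so the current word summarizes the active call stack.

```python
# Illustrative sketch (not the cited implementation): keep the calling
# context as bits packed into a single word, updated in each call's
# prologue and epilogue.

SITE_BITS = 8  # assumed width per call site; real schemes vary

class ContextRegister:
    def __init__(self):
        self.word = 0  # simulates the dedicated register

    def on_call(self, site_id: int):
        # prologue: shift the caller's context and mix in this call site
        self.word = (self.word << SITE_BITS) | (site_id & ((1 << SITE_BITS) - 1))

    def on_return(self):
        # epilogue: drop this frame's bits, restoring the caller's context
        self.word >>= SITE_BITS

reg = ContextRegister()
reg.on_call(3)            # main -> f at site 3
reg.on_call(7)            # f -> g at site 7
ctx_in_g = reg.word       # 3*256 + 7 = 775
reg.on_return()
ctx_back_in_f = reg.word  # 3
```

A real implementation would emit the shift/or in compiled prologue code, and production schemes typically hash rather than shift so the word does not overflow on deep stacks.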
“…Table 1 shows recording the calling stack for an allocation can take an order of magnitude longer than the allocation, which is problematic. Solutions include instrumenting the stack prologue and epilogue to keep track of the current stack through a series of bits stored in a register [12,13,29]. However, overheads of this approach are ≈6% and higher, exceeding all the time spent in memory allocation [31].…”
Section: Lifetime Prediction Challengesmentioning
confidence: 99%
“…During profiling, we label objects with their allocation site at allocation time. We associate each allocation site with a unique identifier which the compiler creates when it first encounters each new bytecode during profiling, following prior work [28]. The compiler generates an allocation sequence that stores this identifier in the header of each object.…”
Section: Profiling
confidence: 99%
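The allocation-site labeling in the excerpt above can be sketched as follows. The dictionary-based ID table and the header field are illustrative assumptions; in the cited system the compiler emits equivalent logic in the generated allocation sequence. Each bytecode-level allocation site gets a unique identifier on first encounter, and that identifier is stamped into every object the site allocates.

```python
# Illustrative sketch: assign each allocation site a unique ID on first
# encounter and store it in each allocated object's "header".

class SiteTable:
    def __init__(self):
        self._ids = {}

    def id_for(self, site_key):
        # create the identifier the first time this site is seen
        if site_key not in self._ids:
            self._ids[site_key] = len(self._ids)
        return self._ids[site_key]

class Obj:
    def __init__(self, site_id):
        self.header_site_id = site_id  # stands in for a header word

sites = SiteTable()

def allocate(site_key):
    # models the compiler-generated allocation sequence
    return Obj(sites.id_for(site_key))

a = allocate(("Foo.java", 42))
b = allocate(("Bar.java", 7))
c = allocate(("Foo.java", 42))
# a and c share a site ID; b gets its own
```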
“…In recent work [30], a novel data structure is proposed to avoid the node lookup operation in dynamic bug detectors. A new node is instead allocated for each context, and the cost of these allocations is mitigated by extending the garbage collector not only to collect unused nodes but also to merge duplicate ones lazily.…”
Section: Related Work
confidence: 99%
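The uptree idea summarized above — allocate a fresh context node per event and let the collector merge duplicates lazily instead of looking nodes up eagerly — can be sketched roughly as below. The node layout and the hash-consing merge pass are assumptions for illustration, not the data structure from [30].

```python
# Illustrative sketch: context nodes point "up" to their parent (an uptree).
# Allocation never searches for an existing node; a later pass (standing in
# for the garbage collector) canonicalizes duplicates.

class Node:
    def __init__(self, site, parent):
        self.site = site      # call site for this frame
        self.parent = parent  # uptree edge toward the root

ROOT = Node("root", None)

def extend(parent, site):
    # fast path: always allocate, no lookup
    return Node(site, parent)

def merge_duplicates(nodes):
    # lazy pass: hash-cons nodes so structurally equal contexts share one node
    canon = {}
    def canonicalize(n):
        if n is None:
            return None
        p = canonicalize(n.parent)
        key = (n.site, id(p))
        if key not in canon:
            n.parent = p
            canon[key] = n
        return canon[key]
    return [canonicalize(n) for n in nodes]

# two events in the same context allocate two distinct nodes...
n1 = extend(extend(ROOT, "f"), "g")
n2 = extend(extend(ROOT, "f"), "g")
m1, m2 = merge_duplicates([n1, n2])
# ...which the merge pass collapses to a single shared node
```

The design point mirrored here is that the per-event cost drops to a plain allocation, while deduplication is deferred to collection time, when the nodes are being traversed anyway.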