2002
DOI: 10.1145/635506.605403
Automatically characterizing large scale program behavior

Abstract: Understanding program behavior is at the foundation of computer architecture and program optimization. Many programs have wildly different behavior on even the very largest of scales (over the complete execution of the program). This realization has ramifications for many architectural and compiler techniques, from thread scheduling, to feedback directed optimizations, to the way programs are simulated. However, in order to take advantage of time-varying behavior, we must first develop the analytical tools nec…

Cited by 166 publications (171 citation statements)
References 13 publications
“…Antoniol et al. [20] used text mining to separate bug reports from feature requests. More generally, approaches such as those presented by Sherwood et al. [21] and Bowring et al. [22] automatically classify program behavior using execution data. In contrast, the work presented in this paper uses test-step failure patterns to automatically classify whether test failures report code defects or are due to test and infrastructure issues.…”
Section: A. Classifying Program Failures and Behavior
confidence: 99%
“…However, this approach is limited; while it may allow one to simulate full workloads more quickly, it does not solve the problem of simulating that workload on a detailed model when an architect is negotiating finer details and trade-offs. A more favorable approach to increasing analysis throughput is to use SimPoint [4] to distill out the important and often repetitive phases of benchmarks, experiment on those phases, and then project workload performance from that small subset of the workload. This approach allows an architect to effectively simulate full workloads on any type of microarchitecture model, regardless of detail, in a more tractable amount of time.…”
Section: Introduction
confidence: 99%
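The SimPoint-style workflow quoted above can be sketched in a few lines: cluster per-interval basic block vectors (BBVs), simulate only one representative interval per phase, and project whole-program performance as a cluster-size-weighted average. This is a minimal illustration under stated assumptions, not the actual SimPoint tool (real SimPoint uses random-projected BBVs and BIC-guided selection of the cluster count); all data, names, and the choice of k below are hypothetical.

```python
# Hedged sketch of phase-based performance projection (not the actual
# SimPoint implementation). Each execution interval is summarized by a
# basic block vector (BBV); intervals with similar BBVs form a phase,
# and only one representative interval per phase is simulated in detail.

def dist2(a, b):
    """Squared Euclidean distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=50):
    """Plain k-means with deterministic init from the first k distinct
    points (assumes at least k distinct BBVs); returns (centroids, labels)."""
    centroids = []
    for p in points:
        if p not in centroids:
            centroids.append(list(p))
        if len(centroids) == k:
            break
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each interval to its nearest centroid.
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: dist2(p, centroids[c]))
        # Recompute each centroid as the mean of its members.
        for c in range(k):
            members = [points[i] for i, l in enumerate(labels) if l == c]
            if members:
                centroids[c] = [sum(d) / len(members) for d in zip(*members)]
    return centroids, labels

def project_cpi(bbvs, cpi_of_interval, k):
    """Pick the interval closest to each cluster centroid as that phase's
    simulation point, then project overall CPI from those few intervals,
    weighting each by its phase's share of all intervals."""
    centroids, labels = kmeans(bbvs, k)
    projected = 0.0
    for c in range(k):
        members = [i for i, l in enumerate(labels) if l == c]
        if not members:
            continue
        rep = min(members, key=lambda i: dist2(bbvs[i], centroids[c]))
        projected += (len(members) / len(bbvs)) * cpi_of_interval[rep]
    return projected

# Two synthetic phases: 6 intervals of one behavior, 4 of another.
bbvs = [[1.0, 0.0]] * 6 + [[0.0, 1.0]] * 4
cpis = [1.0] * 6 + [2.0] * 4
print(project_cpi(bbvs, cpis, k=2))  # close to 1.4 (0.6*1.0 + 0.4*2.0)
```

Only two of the ten intervals are "simulated" here, yet the weighted projection recovers the whole-run average, which is the throughput argument the quoted passage makes.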
“…However, we do not expect that the result will differ much. Reference input sets, fast-forwarded to the first SimPoint [21], [22] with cache and branch predictor warm-up, and a detailed run of 200M are used. Detailed runs of 2B cycles after fast-forwarding are used for the IPC impact study.…”
Section: F. Relationship To Prior Work
confidence: 99%