2009 15th International Conference on Parallel and Distributed Systems
DOI: 10.1109/icpads.2009.130
Speculation with Little Wasting: Saving Cost in Software Speculation through Transparent Learning

Abstract: Software speculation has shown promise in parallelizing programs with coarse-grained dynamic parallelism. However, most speculation systems use offline profiling for the selection of speculative regions. The mismatch with the input-sensitivity of dynamic parallelism may result in large numbers of speculation failures in many applications. Although, with certain protection, the failed speculations may not hurt the basic efficiency of the application, the wasted computing resources (e.g. CPU time and power consumpt…

Cited by 2 publications (3 citation statements)
References 24 publications
“…Seminal behaviors are extracted automatically to support proactive optimizations in the Jikes VM. A similar approach, developed in [10], trades training for incremental adaptation across production runs. The authors of [10] apply this idea to predict the likelihood of successful speculation, where predictions account for input properties indirectly using classification trees.…”
Section: Related Work
confidence: 97%
“…A similar approach, developed in [10], trades training for incremental adaptation across production runs. The authors of [10] apply this idea to predict the likelihood of successful speculation, where predictions account for input properties indirectly using classification trees. Tightfit shares the motivation of these works, but supports direct learning of the relationship between input features and parallelism, rather than passing through seminal behaviors, thanks to offline learning.…”
Section: Related Work
confidence: 97%
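The statements above describe predicting the likelihood of successful speculation from input properties via classification trees. A minimal sketch of that idea, with entirely illustrative feature names and thresholds (not taken from the paper), might look like:

```python
# Hypothetical sketch of a classification-tree predictor of speculation
# success, in the spirit of the approach the citations attribute to [10].
# The features ('input_size', 'dependence_density') and thresholds are
# assumptions for illustration only.

def predict_speculation_success(features):
    """Tiny hand-rolled classification tree over input features.

    features: dict with illustrative keys
      - 'input_size': size of the current input
      - 'dependence_density': fraction of iterations observed to carry
        cross-iteration dependences
    Returns True if speculation is predicted to succeed.
    """
    # Root split: loops with many cross-iteration dependences rarely
    # speculate successfully, so skip them and save CPU time and power.
    if features["dependence_density"] > 0.3:
        return False
    # Second split: very small inputs do not amortize speculation overhead.
    if features["input_size"] < 1000:
        return False
    return True

# The runtime would consult the predictor before spawning speculative work:
profile = {"input_size": 50_000, "dependence_density": 0.05}
print(predict_speculation_success(profile))  # True: worth speculating
```

In a real system the tree would be learned incrementally across production runs rather than hard-coded, which is precisely the "transparent learning" the title refers to.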
“…Speculation is based on the fact that program behavior is usually repetitive and consequently predictable, as demonstrated in studies of branches, memory dependencies, and data values. Software speculation has recently shown promising results in parallelizing such programs. The relevant techniques can be classified into two types: Software speculation: Compilers carry out the necessary coding.…”
Section: Speculative Parallelization Architecture For Computer Clusters
confidence: 99%
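The citation above rests on the read-validate-commit pattern that underlies software speculation: execute a region optimistically on buffered state, then commit only if the values it read were not invalidated. A minimal single-process sketch of that pattern (names and structure are illustrative, not the paper's API) is:

```python
# Minimal sketch of the validate/commit pattern behind software
# speculation. 'speculate', 'region', and 'predicted_inputs' are
# illustrative names, not from the paper.

def speculate(shared, region, predicted_inputs):
    """Run 'region' speculatively against 'shared' state.

    predicted_inputs: keys the region is expected to read.
    Returns True on successful commit, False on misspeculation.
    """
    # Record the values the speculative region will read.
    read_set = {k: shared[k] for k in predicted_inputs}
    # Execute on a private copy so writes are buffered, not visible.
    private = dict(shared)
    region(private)
    # Validate: if any read value changed underneath us, the speculation
    # failed and the buffered writes are discarded -- this discarded work
    # is exactly the waste the paper aims to reduce.
    if any(shared[k] != v for k, v in read_set.items()):
        return False
    shared.update(private)  # commit buffered writes atomically
    return True

shared = {"x": 1, "y": 0}
ok = speculate(shared, lambda s: s.__setitem__("y", s["x"] + 1), ["x"])
print(ok, shared["y"])  # True 2
```

Because misspeculation still consumes the CPU time spent in `region`, a predictor that refuses to call `speculate` on likely failures directly saves that wasted effort.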