2010 Design, Automation & Test in Europe Conference & Exhibition (DATE 2010)
DOI: 10.1109/date.2010.5457197

Energy-performance design space exploration in SMT architectures exploiting selective load value predictions

Abstract: This paper presents a design space exploration of a selective load value prediction scheme suitable for energy-aware Simultaneous Multi-Threaded (SMT) architectures. A load value predictor is an architectural enhancement that speculates on the result of a microprocessor load instruction to speed up the execution of the following instructions. The proposed enhancement differs from a classic predictor in its improved selection scheme, which activates the predictor only when a …
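As a rough illustration of the mechanism the abstract describes, the following is a minimal sketch (not the paper's actual design) of a selective load value predictor: a direct-mapped table indexed by the load PC, where a saturating confidence counter gates ("selects") prediction so that low-confidence loads leave the predictor inactive and avoid the associated energy cost. Table size, counter thresholds, and all field names are assumptions.

```python
# Sketch of a selective load value predictor (SLVP). Assumed design, not the
# paper's: direct-mapped table indexed by load PC; each entry keeps the last
# loaded value plus a saturating confidence counter; prediction is activated
# only when confidence reaches a threshold.

class SLVPEntry:
    __slots__ = ("tag", "value", "confidence")
    def __init__(self):
        self.tag = None
        self.value = 0
        self.confidence = 0

class SelectiveLVP:
    def __init__(self, entries=1024, threshold=2, max_conf=3):
        self.entries = entries
        self.threshold = threshold      # selection threshold (assumed)
        self.max_conf = max_conf        # saturation value (assumed)
        self.table = [SLVPEntry() for _ in range(entries)]

    def _index(self, pc):
        return (pc >> 2) % self.entries  # drop instruction-alignment bits

    def predict(self, pc):
        """Return a predicted value, or None when the selector keeps the
        predictor inactive (entry missing or confidence below threshold)."""
        e = self.table[self._index(pc)]
        if e.tag == pc and e.confidence >= self.threshold:
            return e.value
        return None

    def update(self, pc, actual_value):
        """Train the entry with the committed load value."""
        e = self.table[self._index(pc)]
        if e.tag != pc:
            e.tag, e.value, e.confidence = pc, actual_value, 0
        elif e.value == actual_value:
            e.confidence = min(e.confidence + 1, self.max_conf)
        else:
            e.value, e.confidence = actual_value, 0
```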

Cited by 7 publications (8 citation statements); references 14 publications.
“…(All pipeline stages width = 1). As can be seen in Fig. 1, the results show that although a bigger cache is one of the performance-improvement approaches in embedded processors [5,12-14], increasing the cache size beyond a threshold saturates and then degrades performance. This is because a larger cache incurs longer access delays, so growing the cache is not always applicable as an approach to better performance in embedded processors.…”
Section: A. Performance Analysis
confidence: 77%
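The saturation effect described in the statement above can be illustrated with the standard average-memory-access-time relation AMAT = hit_time + miss_rate × miss_penalty: a larger cache lowers the miss rate but lengthens the hit time. The numbers below are purely illustrative assumptions, not data from the cited study.

```python
# Illustrative-only numbers (assumed): AMAT = hit_time + miss_rate * miss_penalty.
# Growing the cache lowers the miss rate but lengthens the hit time, so AMAT
# improves, flattens, and eventually degrades.

configs = [
    # (cache size in KB, hit time in cycles, miss rate)
    (8,   1, 0.10),
    (16,  1, 0.06),
    (32,  2, 0.04),
    (64,  3, 0.03),
    (128, 4, 0.028),
]
MISS_PENALTY = 50  # cycles to main memory (assumed)

for size_kb, hit_time, miss_rate in configs:
    amat = hit_time + miss_rate * MISS_PENALTY
    print(f"{size_kb:4d} KB: AMAT = {amat:.2f} cycles")
```

Running this prints an AMAT that drops from 6.0 cycles (8 KB) to 4.0 cycles (16-32 KB) and then rises again for the larger configurations, mirroring the saturate-then-degrade behaviour the citing paper reports.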
“…The target architecture is a superscalar Alpha AXP 21264 processor augmented with a direct-mapped SLVP of 1024 entries, an access latency of one cycle, and a prediction latency of three cycles [3]. It has a register file of [32 int/32 fp] * 8, a reorder buffer (ROB) of 128 entries, and a load/store queue (LSQ) of 48 entries.…”
Section: Simulation Methodology
confidence: 99%
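For reference, the simulated configuration quoted above can be captured in a small sketch. The structure and field names below are hypothetical conveniences; only the values come from the citation statement, and the *8 factor is read here as per-thread-context replication of the register file.

```python
# Hedged sketch of the simulated configuration described above. Field names
# are hypothetical; values are taken from the quoted citation statement.

baseline_config = {
    "core": "Alpha AXP 21264 (superscalar)",
    "slvp": {
        "organization": "direct-mapped",
        "entries": 1024,
        "access_latency_cycles": 1,
        "prediction_latency_cycles": 3,
    },
    "register_file": {"int": 32, "fp": 32, "contexts": 8},  # [32 int / 32 fp] * 8
    "rob_entries": 128,
    "lsq_entries": 48,
}
```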
“…The SLVP was already used in our previous papers [1,3], and other load value predictors have been proposed in [7,8]. Different architectural support techniques for value prediction, as well as energy-efficient approaches that speculate on the results of load instructions to speed up execution, are presented in [9,10].…”
Section: Related Work
confidence: 99%
“…On the other hand, although caches are primarily used to bridge the performance gap between the processor and main memory [5,6], research shows that the major part of a processor's energy is consumed in the caches [6-14]. Hence, methods that lead to an optimal performance/power ratio are desirable for embedded processors.…”
Section: Introduction
confidence: 99%