International Symposium on Code Generation and Optimization, 2003. CGO 2003.
DOI: 10.1109/cgo.2003.1191541
Improving quasi-dynamic schedules through region slip

Abstract: Modern processors perform dynamic scheduling to achieve better utilization of execution resources. A schedule created at run-time is often better than one created at compile-time as it can dynamically adapt to specific events encountered at execution-time. In this paper, we examine some fundamental impediments to effective static scheduling. More specifically, we examine the question of why schedules generated quasi-dynamically by a low-level runtime optimizer and executed on a statically scheduled machine per…

Cited by 8 publications (10 citation statements)
References 28 publications
“…Since there are multiple queues, instructions from different LRR blocks can be overlapped (Figure 4). Instructions in each basic block are therefore issued in their statically scheduled order but are overlapped with instructions from successive basic blocks (Note, the LR queue is conceptually similar to the region-slip-enabled issue buffer proposed by Spadini et al [26]. Their proposed mechanism uses a FIFO-based issue buffer that allows a block's schedule to 'slip' into the schedule of a previous block.…”
Section: Reorder-sensitive Issue Logic
confidence: 98%
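The slip behaviour described in that citation can be illustrated with a small simulation. The following is a minimal sketch, assuming a hypothetical model in which each region (basic block) keeps its statically scheduled order in its own FIFO and the issue logic may pull ready instructions from a younger region's FIFO while the older region's head instruction is stalled; the names (Instr, issue, issue_width) are illustrative and not taken from the cited papers.

```python
from collections import deque

# Hypothetical sketch of region slip: each region keeps its statically
# scheduled order in a FIFO; issue logic considers older regions first,
# stops within a region at a stalled head instruction, but may still pull
# ready instructions from the head of a younger region's FIFO, letting the
# younger schedule "slip" into the older one.

class Instr:
    def __init__(self, name, ready_cycle):
        self.name = name
        self.ready_cycle = ready_cycle  # cycle at which operands are ready

def issue(regions, issue_width=2, max_cycles=20):
    fifos = [deque(r) for r in regions]
    schedule = []
    for cycle in range(max_cycles):
        issued = []
        for fifo in fifos:               # older regions considered first
            while fifo and len(issued) < issue_width:
                head = fifo[0]
                if head.ready_cycle <= cycle:
                    issued.append(fifo.popleft().name)
                else:
                    break                # head stalled: stop in this region,
                                         # but let the next region slip in
        schedule.append((cycle, issued))
        if not any(fifos):
            break
    return schedule

# Example: region B overlaps (slips into) region A while A's tail is stalled.
region_a = [Instr("a0", 0), Instr("a1", 5)]
region_b = [Instr("b0", 0), Instr("b1", 1)]
for cycle, names in issue([region_a, region_b]):
    print(cycle, names)
```

In this toy run, a0 and b0 issue together in cycle 0 and b1 issues in cycle 1, even though a1 does not become ready until cycle 5; within each region, issue order still follows the static schedule.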
“…The out-of-order pipeline schedules and renames the instructions. However, there is an in-order rePLay [16], where the instructions in a trace are scheduled and renamed after optimization, and executed in an in-order pipeline. It uses a conventional register file, and a trace needs to record its live-in and live-out registers.…”
Section: A Caching Proposals
confidence: 99%
“…We believe the DTSVLIW architecture to be simpler than that of DIF and easier to implement. Work on the DIF appears to have ceased, while it is not clear how rePLay could be extended to multi-threading: the rePLay paper [9] states that the scheduler can be hardware or software based, and the hardware scheduler takes 10 clock cycles for each instruction scheduled, which indicates that the scheduler does not lie in the main execution path. In the DTSVLIW, scheduling occurs in a scalar mode of execution with the scheduler designed to process 1 instruction per cycle, although this may not be achieved consistently because of delays in the arrival of instructions at the scheduler due to latency issues elsewhere.…”
confidence: 97%
“…A number of architectures perform dynamic code scheduling of the input code stream to identify concurrent code sequences, as "a schedule created at run-time is often better than one created at compile time" [9]. Thus DIF [10], DTSVLIW [11][12][13][14][15][16], and rePLay [9] architectures are all single threaded ones that do dynamic code scheduling on a single process.…”
confidence: 99%