Proceedings of 1995 1st IEEE Symposium on High Performance Computer Architecture
DOI: 10.1109/hpca.1995.386541
Thread prioritization: a thread scheduling mechanism for multiple-context parallel processors

Abstract: Multiple-context processors provide register resources that allow rapid context switching between several threads as a means of tolerating long communication and synchronization latencies. When scheduling threads on such a processor, we must first decide which threads should have their state loaded into the multiple contexts, and second, which loaded thread is to execute instructions at any given time. In this paper we show that both decisions are important, and that incorrect choices can lead to serious perfo…
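The abstract describes a two-level decision: which runnable threads get loaded into the limited set of hardware contexts, and which loaded thread issues instructions next. The sketch below illustrates that structure in Python; it is a hypothetical illustration of the general idea, not the paper's actual mechanism, and all names (`Thread`, `MultiContextScheduler`, the lower-value-is-higher-priority convention) are assumptions.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Thread:
    priority: int               # assumed convention: lower value = higher priority
    name: str = field(compare=False)

class MultiContextScheduler:
    """Sketch of the two scheduling decisions from the abstract:
    (1) which runnable threads to load into the limited hardware contexts,
    (2) which loaded thread executes instructions next."""

    def __init__(self, num_contexts: int):
        self.num_contexts = num_contexts
        self.runnable = []      # priority queue of threads not yet loaded
        self.loaded = []        # threads resident in hardware contexts

    def submit(self, thread: Thread) -> None:
        heapq.heappush(self.runnable, thread)

    def fill_contexts(self) -> None:
        # Decision 1: load the highest-priority runnable threads
        # into any free hardware contexts.
        while len(self.loaded) < self.num_contexts and self.runnable:
            self.loaded.append(heapq.heappop(self.runnable))

    def pick_next(self):
        # Decision 2: among loaded threads, run the highest-priority one.
        self.fill_contexts()
        return min(self.loaded, default=None)
```

With two contexts and three submitted threads, the lowest-priority thread stays unloaded while the scheduler picks the highest-priority loaded thread to run, mirroring the two decisions the paper argues both matter.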

Cited by 5 publications (2 citation statements)
References 21 publications
“…Virtually all existing programming models and paradigms are resembled at some level of HTMT, such as multithreading [10], client-server, message-passing (MPI), or a Linda system [7]. This implies enormous work in designing, testing, and optimizing the system, including concurrency and parallelism [6], load balancing, task migration [5], memory distribution and control distribution.…”
Section: Related Work
confidence: 99%
“…Fine-grain thread parallelism is well suited to fill this performance gap, and well matched to the cluster organizations of future microprocessors. Most applications, even those with small problem sizes, have considerable fine-thread parallelism, and this parallelism, because of its limited extent, has a smaller cache footprint than coarse-thread alternatives [6].…”
Section: Relative Execution Time
confidence: 99%