1998
DOI: 10.1006/jpdc.1998.1429

Randomized Priority Queues for Fast Parallel Access

Abstract: Applications like parallel search or discrete event simulation often assign priority or importance to pieces of work. An effective way to exploit this for parallelization is to use a priority queue data structure for scheduling the work; but a bottleneck-free implementation of parallel priority queue access by many processors is required to make this approach scalable. We present simple and portable randomized algorithms for parallel priority queues on distributed memory machines with fully distributed storage…

Cited by 37 publications (40 citation statements)
References 11 publications
“…For example, diffusion-based load balancing methods [4,1,12] are a simple and robust distributed approach for this purpose. Even centralised algorithms based on global prioritisation can be made scalable using distributed priority queues [13]. Very good load balancing can be achieved by a combination of randomisation and redundancy, using fully distributed and fast algorithms (for example, [14]).…”
Section: Architectural Challenges For the Support Of Invasive Computingmentioning
confidence: 99%
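The diffusion-based load balancing mentioned in the citation above can be illustrated with a minimal sketch: each processor repeatedly exchanges a fraction of its load difference with its neighbours until loads even out. The function name, ring topology, and diffusion parameter `alpha` below are illustrative assumptions, not taken from the cited works.

```python
def diffuse(loads, alpha=0.5, rounds=50):
    """Diffusion load balancing on a ring of processors: per round, each
    node moves a fraction alpha/2 of the load difference toward each of
    its two neighbours. Total load is conserved."""
    n = len(loads)
    for _ in range(rounds):
        new = loads[:]
        for i in range(n):
            left = loads[(i - 1) % n]
            right = loads[(i + 1) % n]
            new[i] += alpha * ((left - loads[i]) + (right - loads[i])) / 2
        loads = new
    return loads

# One heavily loaded node spreads its work across the ring.
balanced = diffuse([100.0, 0.0, 0.0, 0.0])
```

After enough rounds the loads converge toward the average (here, 25 per node), which is the simple, robust behaviour the quoted passage attributes to diffusion methods.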
“…Although we will show in Section 3.4 that this can be implemented without increasing the asymptotic running time, it must be expected that substantial communication overhead will occur if many processors want to acquire the identifier of a free processor simultaneously. Further, the second phase of Algorithm PHF requires global communication in each iteration; effectively, the described implementation simulates a specialized parallel priority queue (see [2,9,23] for further information on parallel priority queues) that allows selection of the min[h, f] heaviest remaining subproblems. While this overhead may be small on parallel machines with high-bandwidth and low-latency interconnection networks, it is likely to limit the speed-up achievable with this algorithm in practice on less powerful platforms, such as networks of workstations.…”
Section: Parallelizing Algorithm Hfmentioning
confidence: 99%
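The operation described above, selecting the heaviest remaining subproblems from a priority queue distributed across processors, can be sketched as follows. This is a sequential simulation under assumed names, not the cited Algorithm PHF: each "processor" holds a local max-heap (a min-heap of negated weights), and a global step repeatedly pops from whichever local heap currently holds the globally heaviest item.

```python
import heapq

def select_heaviest(local_heaps, k):
    """Pop the k globally heaviest items from a list of local heaps.
    Each local heap stores negated weights, so Python's min-heap acts
    as a max-heap; the globally heaviest item is the minimum top."""
    result = []
    for _ in range(k):
        nonempty = [h for h in local_heaps if h]
        if not nonempty:
            break
        best = min(nonempty, key=lambda h: h[0])  # smallest negated = heaviest
        result.append(-heapq.heappop(best))
    return result

# Three "processors" with local subproblem weights.
heaps = []
for weights in [[5, 1], [9, 2], [7, 3]]:
    h = [-w for w in weights]
    heapq.heapify(h)
    heaps.append(h)

top3 = select_heaviest(heaps, 3)  # the 3 heaviest subproblems overall
```

In a real distributed setting this global selection is what incurs the communication overhead the passage warns about, since every iteration must consult all processors' local tops.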
“…A fresh look at priority queues has appeared in the literature recently. See, for instance, [2,3,21,25]. From what is available in the literature, it is almost clear that standard available methods to solve multi-server queues will not result in a closed form solution for the problem described above.…”
Section: Introductionmentioning
confidence: 95%
“…Priority queues have been studied extensively, starting in the early days of queueing theory. See [4,14,25], for example. A fresh look at priority queues has appeared in the literature recently.…”
Section: Introductionmentioning
confidence: 99%