1995
DOI: 10.1145/200836.200872
Bounds on the speedup and efficiency of partial synchronization in parallel processing systems

Abstract: In this paper, we derive bounds on the speedup and efficiency of applications that schedule tasks on a set of parallel processors. We assume that the application runs an algorithm that consists of N iterations and that, before starting its i + 1st iteration, a processor must wait for data (i.e., synchronize) calculated in the ith iteration by a subset of the other processors of the system. Processing times and interconnections between iterations are modeled by random variables with possibly deterministic distributio…
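The synchronization model in the abstract can be sketched as a small simulation. This is not from the paper: it assumes a concrete "neighbor" synchronization pattern (processor p waits for processors p-1, p, and p+1) and i.i.d. exponential task times with mean 1, whereas the paper treats general distributions and subsets. The function names `makespan` and `efficiency` are illustrative, not the paper's notation.

```python
import random

def makespan(num_procs, num_iters, seed=0):
    """Neighbor synchronization: before starting iteration i+1, processor p
    must wait for iteration i of processors p-1, p, and p+1.
    Task times are i.i.d. exponential, mean 1 (an assumption for this sketch)."""
    rng = random.Random(seed)
    done = [0.0] * num_procs  # completion time of the latest iteration, per processor
    for _ in range(num_iters):
        prev = done[:]
        for p in range(num_procs):
            # Earliest moment p may start: all its neighbors finished iteration i.
            ready = max(prev[q] for q in (p - 1, p, p + 1) if 0 <= q < num_procs)
            done[p] = ready + rng.expovariate(1.0)
    return max(done)

def efficiency(num_procs, num_iters, seed=0):
    """Speedup / num_procs, where speedup compares against one processor
    doing all num_procs * num_iters unit-mean tasks serially."""
    return num_iters / makespan(num_procs, num_iters, seed)
```

With a fixed seed the simulation is deterministic, which makes it easy to observe how efficiency falls as the number of processors grows, the qualitative behavior the paper bounds analytically.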

Cited by 10 publications (11 citation statements)
References 23 publications
“…We also give a new proof of the constant bound for normally distributed tasks shown by Chang and Nelson [1]. Finally, we show that for certain power-law distributions, the time between tasks for a processor goes to infinity as the number of processors increases, even for neighbor synchronization, as long as the graph is strongly connected.…”
Section: Introduction
confidence: 71%
“…Furthermore, the expected time for a node along a path in the loop induced by this synchronization is less than or equal to the unconditional expectation of the task distribution, since processors that finish tasks earlier are included along more paths than those that finish later. Using these two facts, we can apply their supermartingale that sums along all possible paths and show that their bound applies here (for more details, see [1]).…”
Section: The First-neighbor Model
confidence: 99%
“…Issuing tasks in parallel or gang scheduling is tantamount to a fork, and barrier synchronization to await the completion of all tasks is a join. Bounds on speedup and efficiency are obtained in Chang and Nelson [1995], when partial synchronization is required. The analysis is complicated, as some iterations start while others are still in progress.…”
Section: Introduction
confidence: 99%
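The fork/join analogy in the statement above can be made concrete with a minimal sketch: issuing tasks in parallel is the fork, and a full barrier that awaits the completion of all tasks before the next iteration is the join. This is an illustrative example using Python's standard `threading.Barrier`, not code from any of the cited works.

```python
import threading

def worker(barrier, counts, idx, num_iters):
    """One simulated processor: do an iteration of work, then join the barrier."""
    for _ in range(num_iters):
        counts[idx] += 1          # stand-in for one iteration of real work
        barrier.wait()            # full barrier: wait until every worker arrives

def run(num_threads=4, num_iters=3):
    barrier = threading.Barrier(num_threads)
    counts = [0] * num_threads
    threads = [threading.Thread(target=worker, args=(barrier, counts, t, num_iters))
               for t in range(num_threads)]
    for t in threads:
        t.start()                 # "fork": issue all tasks in parallel
    for t in threads:
        t.join()                  # final "join": await completion of all tasks
    return counts
```

Partial synchronization, as analyzed by Chang and Nelson [1995], relaxes this: a processor waits only on a subset of the others, so some iterations can start while others are still in progress, which is what complicates the analysis.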
“…The efficiency is obtained by solving a recursive equation that depends on the distribution of task service times and the expected number of tasks that must be synchronized; efficiency decreases as the number of processors increases [10]. Barrier synchronization can be eliminated [11] for compiler-parallelized codes on software distributed shared memory.…”
Section: Introduction
confidence: 99%