Proceedings SUPERCOMPUTING '90
DOI: 10.1109/superc.1990.130037
Another view on parallel speedup

Abstract: In this paper three models of parallel speedup are studied. They are fixed-size speedup, fixed-time speedup and memory-bounded speedup. Two sets of speedup formulations are derived for these three models. One set requires more information and gives more accurate estimation. The other set considers a simplified case and provides a clear picture of the possible performance gain of parallel processing. The simplified fixed-size speedup is Amdahl's law. The simplified fixed-time speedup is Gustafson's scaled speedup. The…
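The three simplified models named in the abstract can be sketched with their standard closed forms. The memory-bounded form below follows the Sun–Ni style formulation with a workload-scaling function G(p); this is a minimal illustration, not the paper's full derivation, and the function names are chosen here for clarity.

```python
def amdahl_speedup(f, p):
    """Fixed-size speedup: constant workload, parallel fraction f, p processors."""
    return 1.0 / ((1.0 - f) + f / p)

def gustafson_speedup(f, p):
    """Fixed-time speedup: parallel workload scales with p so run time stays fixed."""
    return (1.0 - f) + f * p

def memory_bounded_speedup(f, p, G):
    """Memory-bounded speedup: parallel work grows by a factor G(p) as memory scales.

    G(p) = 1 recovers Amdahl's law; G(p) = p recovers Gustafson's scaled speedup.
    """
    g = G(p)
    return ((1.0 - f) + f * g) / ((1.0 - f) + f * g / p)
```

Note how the memory-bounded model subsumes the other two: with no workload scaling (G(p) = 1) it reduces to the fixed-size case, and with linear scaling (G(p) = p) it reduces to the fixed-time case.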


Cited by 63 publications (47 citation statements)
References 13 publications
“…Because if there is no computation (i.e. the application has pure I/O), there is no overlap. The scalability is tested based on the fixed-size scalability [33]. The number of I/O nodes is increased, but the input data remain the same.…”
Section: B. Execution Time Improvement
confidence: 99%
“…Figure 3, based on [17], illustrates the differences between Amdahl's and Gustafson's laws. Amdahl assumes that the amount of work that can be parallelized, Wp, is constant and independent of the number of cores p. This can be considered overly pessimistic.…”
Section: Amdahl's and Related Laws
confidence: 99%
“…It has been carefully examined in [4]. Average parallelism is equivalent to the maximum speedup [4,15].…”
Section: Degree of Parallelism
confidence: 99%
“…has been carefully studied in [15]. The structure of that study can be used as a guideline for other algorithms.…”
Section: Global Computation
confidence: 99%