Proceedings of the 2003 ACM/IEEE Conference on Supercomputing
DOI: 10.1145/1048935.1050160
Optimizing Reduction Computations In a Distributed Environment

Abstract: We investigate runtime strategies for data-intensive applications that involve generalized reductions on large, distributed datasets. Our set of strategies includes replicated filter state, partitioned filter state, and hybrid options between these two extremes. We evaluate these strategies using emulators of three real applications, different query and output sizes, and a number of configurations. We consider execution in a homogeneous cluster and in a distributed environment where only a subset of nodes host…
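The two extremes named in the abstract can be sketched in a few lines. The following is a hypothetical illustration, not the paper's implementation: with a *replicated* filter state, every node holds a full copy of the reduction object and partial copies are merged at the end; with a *partitioned* filter state, the output space is split across nodes (here by hashing the reduction key), so each node owns a disjoint slice and no final merge is needed. Function and variable names are illustrative assumptions.

```python
from collections import defaultdict

def local_reduce(records, combine):
    """Replicated-state strategy: each node reduces its share of the
    input into a full local copy of the reduction state."""
    state = defaultdict(int)
    for key, value in records:
        state[key] = combine(state[key], value)
    return state

def merge(states, combine):
    """Final inter-node merge of the replicated partial states."""
    out = defaultdict(int)
    for state in states:
        for key, value in state.items():
            out[key] = combine(out[key], value)
    return out

def partition_then_reduce(records, combine, n_nodes):
    """Partitioned-state strategy: each node owns a disjoint slice of
    the output space, selected here by hashing the key."""
    shards = [defaultdict(int) for _ in range(n_nodes)]
    for key, value in records:
        shard = shards[hash(key) % n_nodes]
        shard[key] = combine(shard[key], value)
    return shards

if __name__ == "__main__":
    data = [("a", 1), ("b", 2), ("a", 3), ("c", 4)]
    add = lambda x, y: x + y
    # Replicated: two nodes each reduce half the input, then merge.
    merged = merge([local_reduce(data[:2], add),
                    local_reduce(data[2:], add)], add)
    print(dict(merged))  # {'a': 4, 'b': 2, 'c': 4}
```

The trade-off the paper studies follows directly from this sketch: replication avoids communication during the reduction but the merge cost grows with the output size, while partitioning bounds per-node memory at the cost of routing each record to the node owning its key.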

Cited by 6 publications (3 citation statements)
References 27 publications
“…In our experience with the filter-stream programming model, most applications are bottleneck free, and the number of active internal tasks are higher than the available processors [8,11,17,35,40,43]. Thus, the proposed approach to exploit heterogeneous resources consists of allocating multiple tasks concurrently to processors where they will perform the best, as detailed in Sect.…”
Section: Motivating Application
confidence: 99%
“…Although more complex hybrid solutions to the decomposition of the input and output data can be developed [12], their performance is typically close to that of the output-partitioned scheme when the output size is large. Also, since the input-partitioned and output-partitioned parallelization schemes represent the extremes of the SAR image formation pipeline design space and constitute good base cases for comparison, in this work we only develop and present these two parallelization schemes.…”
Section: Figure 4 SAR Imaging Input Partitioning
confidence: 99%
“…Our work is distinct in considering a higher-level language and virtual view of the datasets. Kurc et al have examined different runtime strategies for supporting reductions in a distributed environment [6]. We focus on supporting a high-level language, but currently have implemented only a single strategy.…”
Section: Related Work
confidence: 99%