2004
DOI: 10.1007/978-3-540-30192-9_11

Load Distribution for Distributed Stream Processing

Cited by 6 publications (10 citation statements).
References 9 publications.
“…Another recent paper that comes out of the Aurora project and is very relevant to this paper is the dynamic load distribution strategy [22]. By and large, the goal of query optimization is to map operators efficiently to resources in a distributed environment.…”
Section: Related Work (mentioning)
confidence: 99%
“…As a result, some streams may get processed significantly more slowly than the rest. Most previous work on streaming data processing assigns operators to each processing unit in a static or semi-dynamic fashion [4,6,13,21,22]. By doing worst-case analysis at compilation and/or deployment time, the system may be over-provisioned but has a reasonable chance of meeting the desired service level.…”
Section: Introduction (mentioning)
confidence: 98%
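The statement above contrasts dynamic load distribution with static or semi-dynamic operator placement based on worst-case analysis. The following is a minimal sketch of that static style of placement, under assumed names and a simple first-fit-decreasing heuristic; it is illustrative only and not the algorithm of any cited system.

```python
# Illustrative sketch: statically assign stream operators to processing nodes
# using estimated worst-case per-operator load, so every node stays under its
# capacity even at peak input rates (over-provisioned, but predictable).
from typing import Dict


def static_assign(worst_case_load: Dict[str, float],
                  node_capacity: float,
                  num_nodes: int) -> Dict[str, int]:
    """Greedy first-fit-decreasing placement of operators onto nodes."""
    loads = [0.0] * num_nodes                      # provisioned load per node
    placement: Dict[str, int] = {}
    # Place the heaviest operators first so large items are not stranded.
    for op in sorted(worst_case_load, key=worst_case_load.get, reverse=True):
        demand = worst_case_load[op]
        for node in range(num_nodes):
            if loads[node] + demand <= node_capacity:
                loads[node] += demand
                placement[op] = node
                break
        else:
            raise RuntimeError(f"more nodes needed: {op} does not fit")
    return placement


if __name__ == "__main__":
    ops = {"filter": 0.3, "join": 0.6, "aggregate": 0.4, "map": 0.2}
    print(static_assign(ops, node_capacity=1.0, num_nodes=2))
```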
“…In [36] a decentralized scheme based on load coalescing is proposed. This work shares our aim of minimizing communication delays; however, it takes a different direction, as we focus on a centralized approach with a fast allocation algorithm.…”
Section: Related Work (mentioning)
confidence: 99%
“…We aim to reduce the communication latency while keeping the nodes below a computational load threshold. Although previous work [11,36,26] suggests decentralized task allocation schemes for avoiding bottlenecks, we argue that a centralized approach can be efficient as long as we design fast allocation algorithms. We thus devise fast task allocation heuristics while keeping the quality of the resulting allocation high.…”
Section: Introduction (mentioning)
confidence: 99%
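The statement above describes a centralized, latency-aware allocation heuristic constrained by a per-node load threshold. The sketch below illustrates that general idea under assumptions of my own (operator list in topological order, a node-to-node latency matrix, per-operator CPU cost); it is not the citing paper's actual heuristic.

```python
# Hedged sketch of a centralized allocation heuristic: place each operator on
# the node that minimizes estimated communication latency to its already
# placed upstream operators, subject to a per-node computational load limit.
from typing import Dict, List, Tuple


def allocate(operators: List[str],                 # in topological order
             cpu_cost: Dict[str, float],
             upstream: Dict[str, List[str]],
             latency: List[List[float]],           # latency[i][j] between nodes
             load_threshold: float) -> Dict[str, int]:
    num_nodes = len(latency)
    node_load = [0.0] * num_nodes
    placement: Dict[str, int] = {}

    for op in operators:
        best: Tuple[float, int] = (float("inf"), -1)
        for node in range(num_nodes):
            if node_load[node] + cpu_cost[op] > load_threshold:
                continue                            # node would exceed its limit
            # Total latency to the nodes hosting this operator's inputs.
            comm = sum(latency[placement[u]][node] for u in upstream.get(op, []))
            best = min(best, (comm, node))
        if best[1] < 0:
            raise RuntimeError(f"no node can host {op} under the threshold")
        placement[op] = best[1]
        node_load[best[1]] += cpu_cost[op]
    return placement
```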
“…As in [11], [12] assumes homogeneous machines and perfect networks; on the other hand, it uses CPU capacity to reflect resource constraints and operator clustering as a preprocessing step to prevent costly data from crossing the network. [10] attempts to handle situations where the network transfer delay cannot be ignored by grouping neighboring operators in the initial mapping. When adjustment is necessary, only operators at the boundary are migrated to the neighboring host, so as to avoid creating excessive network traffic.…”
Section: Related Work (mentioning)
confidence: 99%
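The last statement mentions migrating only boundary operators to a neighboring host when a node becomes overloaded. The following is a speculative sketch of that idea, with all names, data structures, and the load-ordering policy assumed for illustration; it is not the procedure of [10].

```python
# Illustrative sketch: when a host is overloaded, migrate only "boundary"
# operators -- those already exchanging data with a neighboring host -- so the
# adjustment introduces little additional network traffic.
from typing import Dict, List, Set, Tuple


def boundary_migration(host_ops: Dict[int, Set[str]],
                       host_load: Dict[int, float],
                       op_load: Dict[str, float],
                       edges: List[Tuple[str, str]],   # operator dataflow edges
                       overloaded: int,
                       threshold: float) -> None:
    """Move boundary operators off `overloaded` until it drops below threshold."""
    owner = {op: h for h, ops in host_ops.items() for op in ops}
    # Boundary operators: placed on the overloaded host but connected to an
    # operator living on some other (neighboring) host.
    boundary = [(op, owner[peer])
                for a, b in edges
                for op, peer in ((a, b), (b, a))
                if owner[op] == overloaded and owner[peer] != overloaded]
    # Move the heaviest boundary operators first.
    for op, neighbor in sorted(boundary, key=lambda x: op_load[x[0]], reverse=True):
        if host_load[overloaded] <= threshold:
            break
        if owner[op] != overloaded:
            continue                                   # already migrated
        host_ops[overloaded].discard(op)
        host_ops[neighbor].add(op)
        owner[op] = neighbor
        host_load[overloaded] -= op_load[op]
        host_load[neighbor] += op_load[op]
```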