20th Annual International Conference on High Performance Computing 2013
DOI: 10.1109/hipc.2013.6799119
Adding data parallelism to streaming pipelines for throughput optimization

Abstract: The streaming model is a popular model for writing high-throughput parallel applications. A streaming application is represented by a graph of computation stages that communicate with each other via FIFO channels. In this paper, we consider the problem of mapping streaming pipelines (streaming applications where the graph is a linear chain) onto a set of computing resources in order to maximize their throughput. In a parallel setting, subsets of stages, called components, can be mapped onto different computing resources…
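To make the optimization target concrete, the sketch below evaluates the throughput of one candidate mapping under a simple, assumed cost model (per-stage work divided by resource speed, with the slowest component limiting the rate). The function name and cost model are illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical illustration (not the paper's algorithm): throughput of one
# candidate mapping of a linear pipeline onto computing resources, where a
# "component" is a contiguous block of stages assigned to a single resource
# and the pipeline rate is limited by its slowest (bottleneck) component.

def mapping_throughput(stage_work, components, resource_speed):
    """stage_work[i]     : work per item for stage i
       components        : list of (start, end) index ranges, one per resource,
                           covering the chain contiguously
       resource_speed[r] : speed of the resource hosting component r
       Returns items/second under an assumed work/speed cost model."""
    rates = []
    for r, (start, end) in enumerate(components):
        work = sum(stage_work[start:end + 1])      # total work in this component
        rates.append(resource_speed[r] / work)     # items/second on this resource
    return min(rates)                              # bottleneck sets the throughput

# Example: a 4-stage chain split into two components on two resources.
print(mapping_throughput([2.0, 1.0, 3.0, 1.0],
                         [(0, 1), (2, 3)],
                         [1.0, 2.0]))   # min(1/3, 2/4) = 0.333...
```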

Cited by 8 publications (5 citation statements)
References 20 publications
“…Streams that can be processed out of order are ideal candidates for the run-time to automatically parallelize. Li et al. [35] describe algorithms for replicating kernels in a pipelined environment, both for homogeneous compute resources and for heterogeneous compute resources.…”
Section: RaftLib as a Research Platform
confidence: 99%
“…The placement of each kernel changes not only the throughput but also the latency of the overall application. In addition, it is often possible to replicate kernels (executing them in parallel) without altering the application semantics [35]. RaftLib exploits this ability to blend pipeline and data parallelism as well.…”
Section: Design Considerations
confidence: 99%
“…Understanding this information is critical to understanding the secondary effects that each decision has for the performance of an application. Within a streaming data-flow graph, it is often possible to replicate kernels (executing them in parallel) to enhance performance without altering the application semantics (Li et al., 2013). RaftLib exploits this ability to extract more pipeline and task parallelism at runtime (dynamically) without further input from the programmer.…”
Section: Design Considerations
confidence: 99%
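The kernel-replication idea referenced in the two statements above can be illustrated with a small sketch. The code below is not RaftLib's API; it simply shows, under the assumption of a stateless and order-insensitive kernel, how several replicas of one pipeline stage can drain a shared FIFO in parallel, blending pipeline and data parallelism.

```python
# Minimal sketch (not RaftLib's API): replicating a stateless kernel so several
# copies consume items from the same FIFO in parallel.
import threading, queue

def replicate_kernel(kernel, in_q, out_q, copies=4):
    """Run `copies` identical workers that each pull from in_q, apply the
       (assumed stateless, order-insensitive) kernel, and push to out_q."""
    def worker():
        while True:
            item = in_q.get()
            if item is None:              # sentinel: shut this replica down
                break
            out_q.put(kernel(item))
    threads = [threading.Thread(target=worker) for _ in range(copies)]
    for t in threads:
        t.start()
    return threads

# Usage: a pipeline stage replicated 4 ways.
src, mid = queue.Queue(), queue.Queue()
workers = replicate_kernel(lambda x: x * x, src, mid, copies=4)
for i in range(10):
    src.put(i)
for _ in workers:
    src.put(None)                         # one sentinel per replica
for t in workers:
    t.join()
results = [mid.get() for _ in range(10)]  # order may differ from input order
```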
“…This "batching" idea has been implemented in some streaming computing systems [11,31]. Note that amortizing communication overhead is just one way of improving the throughput of a streaming application, which can also be optimized in many other ways [12,19].…”
Section: Introduction
confidence: 99%
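As a rough illustration of the batching idea mentioned in this statement, the sketch below groups items before each send so the fixed per-message overhead is amortized over the batch; the helper name and the overhead model are assumptions for illustration, not drawn from the cited systems.

```python
# Illustrative sketch of batching: group items into batches before sending them
# downstream so the fixed per-message overhead is paid once per batch instead
# of once per item. Names and the cost model are assumed for illustration.

def send_batched(items, send, batch_size=64):
    """Group `items` and pass each group to `send` (one call per batch)."""
    batch = []
    for item in items:
        batch.append(item)
        if len(batch) == batch_size:
            send(batch)
            batch = []
    if batch:                              # flush the final partial batch
        send(batch)

send_batched(range(10), print, batch_size=4)   # prints batches of 4, 4, and 2 items

# Toy cost model: each send costs a fixed overhead plus a per-item cost.
OVERHEAD, PER_ITEM = 10.0, 1.0
def cost(n_items, batch_size):
    calls = -(-n_items // batch_size)      # ceil division
    return calls * OVERHEAD + n_items * PER_ITEM

print(cost(1000, 1))    # 1000*10 + 1000 = 11000.0 (no batching)
print(cost(1000, 64))   # 16*10 + 1000 = 1160.0 (overhead amortized)
```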