2021
DOI: 10.1002/spe.2948
Providing high‐level self‐adaptive abstractions for stream parallelism on multicores

Abstract: Stream processing applications are common computing workloads that demand parallelism to increase their performance. As in the past, parallel programming remains a difficult task for application programmers. The complexity increases when application programmers must set nonintuitive parallelism parameters, that is, the degree of parallelism. The main problem is that state-of-the-art libraries use a static degree of parallelism and are not sufficiently abstracted for developing stream processing applications. I…


Cited by 12 publications (9 citation statements)
References 41 publications
“…In parallel computing, many entities can be adapted at run-time [13], such as batch sizes [19] to increase throughput, core frequencies to reduce energy consumption [5], and the degree of parallelism, which can be changed dynamically [4,15,23]. However, these optimizations are arguably not flexible enough for the adaptations that real-world stream processing applications demand.…”
Section: Results Summary
confidence: 99%
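The run-time adaptation the excerpt describes can be illustrated with a small sketch. This is a hypothetical controller, not the cited mechanism: the function name, the throughput target, and the 20% slack band are all illustrative assumptions.

```python
# Hypothetical sketch of run-time adaptation of the degree of parallelism:
# a controller nudges the worker count toward a throughput target.
# Names, thresholds, and policy are illustrative only.

def adapt_workers(workers, throughput, target, max_workers=8):
    """Return a new worker count nudged toward the throughput target."""
    if throughput < target and workers < max_workers:
        return workers + 1   # under target: add a replica
    if throughput > 1.2 * target and workers > 1:
        return workers - 1   # well over target: release a replica
    return workers           # within the band: keep the current degree
```

A real self-adaptive runtime would also account for reconfiguration cost and measurement noise; this sketch only conveys the feedback-loop shape.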
“…If not, it goes to step 3. The decision-making strategy infers that two values are significantly different when they differ by more than 20% (a threshold); this is a configurable parameter whose value of 20% was ascertained in previous work [23]. 3.…”
Section: Decision-making Strategy
confidence: 99%
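The 20% threshold rule in the excerpt amounts to a relative-difference test. A minimal sketch, assuming a simple relative comparison (the function name and zero-handling are assumptions, not the cited implementation):

```python
# Hypothetical sketch of the cited decision rule: two measurements are
# "significantly different" when they differ by more than a configurable
# threshold (20% in the cited work).

def significantly_different(current, previous, threshold=0.20):
    """Return True if `current` deviates from `previous` by more than `threshold`."""
    if previous == 0:
        return current != 0  # avoid division by zero; any change counts
    return abs(current - previous) / abs(previous) > threshold
```

For example, a throughput change from 100 to 130 items/s (30%) would trigger an adaptation, while 100 to 110 (10%) would not.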
“…Most of them are data-centric, using static template instantiation or dynamic runtime polymorphism to model data processing in a pipeline. To name a few popular examples: oneTBB [1] and TPL [20] require explicit definitions of input and output types for each stage; GrPPI [8] provides a composable abstraction for data- and stream-parallel patterns with a pluggable back-end to support task scheduling; FastFlow [4] models parallel dataflow using predefined sequential and parallel building blocks; TTG [7] focuses on dataflow programming using various template optimization techniques; SPar [10][11][12]23] analyzes annotated attributes extracted from the data and stream parallelism domain, and automatically generates parallel patterns defined in FastFlow; Proteas [24] introduces a programming model for directive-based parallelization of linear pipelines; [34,35] propose self-adaptive mechanisms to decide the degree of parallelism and generate the pattern compositions in FastFlow. These programming models, however, constrain users to design pipeline algorithms using their data models, making them difficult to use, especially for applications that only need pipeline scheduling atop custom data structures.…”
Section: Related Work
confidence: 99%
“…Self-adaptiveness can also be used for providing additional parallelism abstractions to application programmers, which is a potential opportunity to simplify the process of running stream processing applications. 15,16 However, it is an open question to what extent self-adaptiveness is being applied and how efficient it is for providing parallelism abstractions. This work provides a systematic literature review (SLR) of self-adaptive approaches used in the stream processing scenario.…”
confidence: 99%