2021
DOI: 10.3390/electronics10101188
Assessment of OpenMP Master–Slave Implementations for Selected Irregular Parallel Applications

Abstract: The paper investigates various implementations of the master–slave paradigm using the popular OpenMP API and the relative performance of these implementations on modern multi-core workstation CPUs. It is assumed that a master partitions the available input into a batch of a predefined number of data chunks, which are then processed in parallel by a set of slaves, and the procedure is repeated until all input data has been processed. The paper experimentally assesses the performance of six implementations using OpenMP locks, the tasking…

Cited by 5 publications (2 citation statements)
References 34 publications
“…In paper [5], overall, for integration the best results were obtained for tasking and dynamic for, with or without overlapping (for systems 2 and 1), while for image recognition the best on system 1 were dynamic for and using locks, and on system 2 dynamic for (both versions) and tasking with overlapping. Scalability and overheads during execution of parallel code are studied in more detail in [6], where the authors distinguished four overhead categories, such as the need for synchronization among threads, imbalance, limited parallelism i.e.…”
Section: Related Work
confidence: 94%
“…Among the most popular languages and libraries currently used in high-load computing, fine-grained parallelism tools are implemented in the OpenMP library (which is usually used in combination with C++) and in the Java and C# languages [11][12][13][14].…”
Section: Fine-grained Parallelism Implementations Overview
confidence: 99%