2013 IEEE International Symposium on Parallel & Distributed Processing, Workshops and PhD Forum
DOI: 10.1109/ipdpsw.2013.105

Scalable Loop Self-Scheduling Schemes Implemented on Large-Scale Clusters

Abstract: Loops are the largest source of parallelism in many scientific applications. Parallelizing irregular loop applications to achieve scalable performance on large-scale multi-core clusters is a challenging problem. Previous research proposed an effective Master-Worker model on clusters for distributed self-scheduling schemes that apply to parallel loops with independent iterations. However, this model has not been applied to large-scale clusters. In this paper, we present an extension of the distributed self-…
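The Master-Worker pattern the abstract refers to can be sketched in a few lines of MPI. The following is a minimal illustration, not the paper's implementation: the fixed chunk size, message tags, and empty loop body are assumptions chosen for brevity, whereas the paper's distributed self-scheduling schemes vary the chunk size per request.

```c
/* Minimal sketch of master-worker loop self-scheduling with MPI.
 * Chunk size and tags are illustrative assumptions. */
#include <mpi.h>

#define N        100000   /* total loop iterations (example value) */
#define CHUNK    512      /* fixed chunk size, for illustration    */
#define TAG_WORK 1
#define TAG_STOP 2

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {                       /* master dispenses chunks */
        int next = 0, dummy, active = size - 1;
        MPI_Status st;
        while (active > 0) {
            /* any idle worker asks for more work */
            MPI_Recv(&dummy, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                     MPI_COMM_WORLD, &st);
            if (next < N) {
                int range[2] = { next, next + CHUNK < N ? next + CHUNK : N };
                MPI_Send(range, 2, MPI_INT, st.MPI_SOURCE, TAG_WORK,
                         MPI_COMM_WORLD);
                next = range[1];
            } else {                       /* no work left: retire worker */
                MPI_Send(&dummy, 0, MPI_INT, st.MPI_SOURCE, TAG_STOP,
                         MPI_COMM_WORLD);
                active--;
            }
        }
    } else {                               /* workers self-schedule */
        int req = 0, range[2];
        MPI_Status st;
        for (;;) {
            MPI_Send(&req, 1, MPI_INT, 0, TAG_WORK, MPI_COMM_WORLD);
            MPI_Recv(range, 2, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
            if (st.MPI_TAG == TAG_STOP) break;
            for (int i = range[0]; i < range[1]; i++) {
                /* independent iteration body goes here */
            }
        }
    }
    MPI_Finalize();
    return 0;
}
```

Because every request goes to rank 0, this single-master form becomes a bottleneck at scale, which is the motivation for the hierarchical extension discussed in the citation statements below.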

Cited by 6 publications (4 citation statements); references 33 publications.
“…We use the following two applications in this implementation [7]. The outer loops in these applications are partitioned using scheduling and the tasks are assigned to workers.…”
Section: Large Scale Cluster
Confidence: 99%
“…The hierarchical scheme is based on a supermaster/master/worker model which can reduce the communication overhead and synchronization overhead. Preliminary results have been published in [6,7]. We implemented these schemes on a large-scale cluster of Texas Advanced Computing Center, University of Texas at Austin.…”
Section: Introduction
Confidence: 99%
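As a rough illustration of the supermaster/master/worker structure this statement describes, the skeleton below splits MPI_COMM_WORLD into groups so that workers only ever contact their local master and only group masters contact the supermaster, which is what localizes the communication and synchronization overhead. The group count and role layout are assumptions; the cited papers' actual protocol is not reproduced here.

```c
/* Skeleton of a supermaster/master/worker hierarchy via communicator
 * splitting. Roles are commented rather than implemented. */
#include <mpi.h>

#define NGROUPS 4   /* assumed number of worker groups */

int main(int argc, char **argv) {
    int rank, size, grank;
    MPI_Comm group_comm;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* partition ranks into NGROUPS groups; the lowest rank in each
       group becomes that group's master */
    int color = rank % NGROUPS;
    MPI_Comm_split(MPI_COMM_WORLD, color, rank, &group_comm);
    MPI_Comm_rank(group_comm, &grank);

    if (rank == 0) {
        /* supermaster: hands large iteration blocks to group masters,
           so only NGROUPS processes ever contact it (here rank 0 also
           doubles as group 0's master; a real run might dedicate it) */
    } else if (grank == 0) {
        /* group master: requests a block from the supermaster, then
           self-schedules smaller chunks to its own workers */
    } else {
        /* worker: requests chunks from its group master only, keeping
           contention local to group_comm */
    }

    MPI_Comm_free(&group_comm);
    MPI_Finalize();
    return 0;
}
```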
“…Workloads are one of the important sources of parallelism in scientific computing programs and therefore a lot of research was focused in this area (Wu et al, 2012). A step's workload is called a parallelizable workload if there is no data dependency among all steps, i.e., workloads can be processed in any order or even simultaneously (Han and Chronopoulos, 2013). The order of workload can be roughly divided into four kinds as shown in Fig.…”
Section: LPS and SWM
Confidence: 99%
“…Self-scheduling is a dynamic scheduling technique, through which idling processors can autonomously access a global data structure to obtain additional tasks (Hu et al, 2010). Various self-scheduling schemes, such as pure self-scheduling (PSS), factoring self-scheduling (FSS), guided self-scheduling (GSS), and trapezoid self-scheduling (TSS), have been proven successful for shared memory multiprocessor systems (Han and Chronopoulos, 2013). All self-scheduling schemes except PSS reduce dynamic scheduling overhead by reducing atomic operation times.…”
Section: LPS and SWM
Confidence: 99%
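For reference, the chunk-size rules that distinguish the schemes named in this statement follow their standard formulations in the literature; the sketch below prints the chunk sequence each rule produces for example values of n and P, which are not taken from the cited papers.

```c
/* Standard chunk-size rules for GSS, FSS, and TSS (PSS simply uses
 * chunk size 1 on every request). n and P are example values. */
#include <stdio.h>

int main(void) {
    const int n = 1000;   /* total iterations (example) */
    const int P = 4;      /* number of workers (example) */

    /* GSS: each request takes ceil(remaining / P) iterations */
    printf("GSS:");
    for (int r = n; r > 0; ) {
        int c = (r + P - 1) / P;
        printf(" %d", c);
        r -= c;
    }
    printf("\n");

    /* FSS: batches of P equal chunks, each ceil(remaining / (2P)) */
    printf("FSS:");
    for (int r = n; r > 0; ) {
        int c = (r + 2 * P - 1) / (2 * P);
        for (int k = 0; k < P && r > 0; k++) {
            if (c > r) c = r;
            printf(" %d", c);
            r -= c;
        }
    }
    printf("\n");

    /* TSS: chunks shrink linearly from first size f to last size l */
    int f = n / (2 * P), l = 1;               /* typical choices      */
    int N = (2 * n + f + l - 1) / (f + l);    /* number of chunks     */
    double d = (N > 1) ? (double)(f - l) / (N - 1) : 0.0;
    printf("TSS:");
    double c = f;
    for (int i = 0, r = n; i < N && r > 0; i++) {
        int ci = (int)(c + 0.5);
        if (ci > r) ci = r;
        printf(" %d", ci);
        r -= ci;
        c -= d;
    }
    printf("\n");
    return 0;
}
```

GSS, FSS, and TSS all hand out large chunks early and small chunks late, so each worker makes far fewer requests than under PSS's one-iteration chunks, which is how they "reduce atomic operation times" as the quoted statement puts it.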