2010 22nd International Symposium on Computer Architecture and High Performance Computing Workshops
DOI: 10.1109/sbac-padw.2010.6
Effective Dynamic Scheduling on Heterogeneous Multi/Manycore Desktop Platforms

Abstract: GPUs (Graphics Processing Units) have become one of the main co-processors bringing high performance computing to the desktop. Together with multicore CPUs and other co-processors, they turn a desktop into a powerful heterogeneous execution platform for data-intensive calculations. From our perspective, the modern desktop is a heterogeneous cluster that can handle the tasks of several applications at the same time. To improve application performance and exploit such heterogeneity, a distribution of…

Cited by 10 publications (3 citation statements) | References 17 publications
“…This implementation approach is supported by runtime systems like StarPU [6], Harmony [20] and Merge [21], and dynamic scheduling algorithms [22], [23]. In contrast to these approaches, SEParAT offers tool support for the static scheduling, which is especially beneficial for regular applications with a static task structure.…”
Section: Related Work
confidence: 99%
“…Several task scheduling strategies for a multi-GPU platform based on a pool of tasks have been proposed [19,5,6,24]. However, the strategy of maintaining a pool of tasks that are randomly dispatched to idle processing units might perform poorly for applications with data dependencies and thus dependencies between tasks.…”
Section: Runtimes
confidence: 99%
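The pool-of-tasks model the statement above criticizes can be illustrated with a minimal sketch: idle processing units (here, plain worker threads standing in for CPU and GPU workers) repeatedly pull the next available task from a shared pool. All names are illustrative assumptions, not code from the cited papers, and the sketch deliberately ignores inter-task data dependencies, which is exactly the weakness the citation points out.

```python
# Minimal pool-of-tasks sketch: idle units pull work from a shared queue.
# Hypothetical names; ignores data dependencies between tasks.
import queue
import threading

def worker(pool, results, unit_name):
    """A processing unit: pull tasks until the pool is empty, then go idle."""
    while True:
        try:
            task = pool.get_nowait()
        except queue.Empty:
            return  # no more work: this unit exits
        results.append((unit_name, task()))  # list.append is atomic in CPython
        pool.task_done()

def run_pool(tasks, units=("cpu-0", "gpu-0")):
    """Dispatch callable tasks to whichever unit is idle first."""
    pool = queue.Queue()
    for t in tasks:
        pool.put(t)
    results = []
    threads = [threading.Thread(target=worker, args=(pool, results, u))
               for u in units]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return results
```

Because units grab tasks in whatever order they become idle, a task whose input is produced by another task may be dispatched before its producer finishes, which is why this strategy degrades for dependency-laden workloads.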
“…Consequently, we made several improvements to the algorithm, leading to Algorithm 2. The improvement is partially based on [20], and its main concept was briefly presented in [21]. It swaps pairs of assignments produced by Algorithm 1 and verifies whether each swap yields a gain in the total performance of the tasks.…”
Section: Dynamic Scheduler
confidence: 99%
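The pairwise-swap refinement described above is a standard local-search pattern, which can be sketched as follows: start from an initial task-to-device assignment (Algorithm 1's output), try swapping each pair of assignments, and keep a swap only if it lowers the estimated total completion time. The function names and the makespan-style cost model are illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical sketch of pairwise-swap refinement of a task assignment.
# assignment[i] is the device task i runs on; cost[i][device] is the
# estimated running time of task i on that device.

def total_cost(assignment, cost):
    """Estimated makespan: the most loaded device bounds completion time."""
    load = {}
    for task, device in enumerate(assignment):
        load[device] = load.get(device, 0.0) + cost[task][device]
    return max(load.values())

def pairwise_swap_refine(assignment, cost):
    """Greedily swap pairs of assignments while doing so reduces the cost."""
    assignment = list(assignment)
    best = total_cost(assignment, cost)
    improved = True
    while improved:
        improved = False
        for i in range(len(assignment)):
            for j in range(i + 1, len(assignment)):
                if assignment[i] == assignment[j]:
                    continue  # swapping identical devices changes nothing
                assignment[i], assignment[j] = assignment[j], assignment[i]
                c = total_cost(assignment, cost)
                if c < best:
                    best = c          # keep the improving swap
                    improved = True
                else:
                    # revert: the swap did not promote a gain
                    assignment[i], assignment[j] = assignment[j], assignment[i]
    return assignment
```

For example, with two tasks whose costs are mirrored across a CPU and a GPU, the refinement swaps a worst-case initial assignment into the one where each task runs on its faster device.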