2016
DOI: 10.12694/scpe.v17i1.1148

Many-Task Computing on Many-Core Architectures

Abstract: Many-Task Computing (MTC) is a common scenario on many parallel systems, such as clusters, grids, clouds, and supercomputers, but it is not as widespread on shared-memory parallel processors. Given the spectacular growth in performance and in the number of cores integrated in many-core architectures, the study of MTC on such architectures is becoming increasingly relevant. In this paper, the authors present the programming mechanisms available for taking advantage of such massively parallel …

Cited by 14 publications (13 citation statements) · References 33 publications
“…This option is motivated by the work of Raicu and his coworkers [43] who find that an MTC application can be efficiently run on a cluster computer. The authors of [53] assess how efficiently MTC applications run on other computer architectures. A cluster computer consists of a large number of so-called personal computers (PCs) that are connected to each other through high speed interconnects.…”
Section: Methods (mentioning)
Confidence: 99%
“…Second approach is functional classification [22]. There are three types of distributed computing systems due to the type of processing: High-throughput computing [28], high-performance computing [29] and multi-task computing [30]. However, the difference between high-performance and multi-task computing is not strict [22].…”
Section: Functional Classification (mentioning)
Confidence: 99%
“…The process concentrates on cooperation with the file system. Examples of the use of this type of system are: High-performance, distributed databases [22] or search engines [30].…”
(mentioning)
Confidence: 99%
“…Our code is able to compute a high number of systems (neurons) of any size in one call (CUDA kernel), using one thread per Hines system instead of one CUDA block per system. Although multiple works have explored the use of GPUs to compute multiple independent problems in parallel without transforming the data layout [14][15][16][17], the particular characteristics of the sparsity of the Hines matrices force us to modify the data layout to efficiently exploit the memory hierarchy of the GPUs (coalescing accesses to GPU memory).…”
Section: Motivation (mentioning)
Confidence: 99%
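
The one-thread-per-system scheme with a coalescing-friendly data layout described in the quote above can be illustrated with a minimal sketch. The following hypothetical CUDA example is not taken from the cited code: the names solve_many, n_systems, and n_rows are illustrative, and the per-row update is only a stand-in for the actual forward/backward sweeps of a Hines or tridiagonal solver. The point it shows is the layout: value j of system s is stored at offset j * n_systems + s (element-major), so the 32 threads of a warp touch 32 contiguous words on every step.

// Hedged sketch: one CUDA thread per independent system, data interleaved
// so that consecutive threads access consecutive addresses (coalesced).
// All identifiers are illustrative; the "solve" is a placeholder update.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void solve_many(const double* __restrict__ rhs,
                           double* __restrict__ x,
                           int n_systems, int n_rows)
{
    int s = blockIdx.x * blockDim.x + threadIdx.x;   // one thread = one system
    if (s >= n_systems) return;

    // Placeholder per-system loop standing in for the real solver sweeps.
    double acc = 0.0;
    for (int j = 0; j < n_rows; ++j) {
        acc += rhs[j * n_systems + s];               // coalesced load across the warp
        x[j * n_systems + s] = acc;                  // coalesced store
    }
}

int main()
{
    const int n_systems = 1 << 16, n_rows = 64;
    const size_t bytes = size_t(n_systems) * n_rows * sizeof(double);

    double *d_rhs, *d_x;
    cudaMalloc(&d_rhs, bytes);
    cudaMalloc(&d_x, bytes);
    cudaMemset(d_rhs, 0, bytes);

    int threads = 256;
    int blocks = (n_systems + threads - 1) / threads;
    solve_many<<<blocks, threads>>>(d_rhs, d_x, n_systems, n_rows);
    cudaDeviceSynchronize();

    printf("launched %d blocks of %d threads\n", blocks, threads);
    cudaFree(d_rhs);
    cudaFree(d_x);
    return 0;
}

With the more natural system-major layout (all rows of one system stored contiguously), threads in a warp would stride by n_rows and the accesses would no longer coalesce; transforming the data into the interleaved form above is the kind of layout change the cited work refers to.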