Data-Locality Aware Scientific Workflow Scheduling Methods in HPC Cloud Environments (2016)
DOI: 10.1007/s10766-016-0463-0

Cited by 17 publications (8 citation statements)
References 10 publications
“…The study, however, disregards network effects, assuming that data flow between co-hosted virtual machines is as efficient as local data access. Choi et al [39] present a mechanism for locality-aware resource management in High-Performance Computing (HPC) cloud environments, called Data-Locality Aware Workflow Scheduling (D-LAWS). Their solution consolidates virtual machines and incorporates task parallelism via data flow into the task execution planning of data-intensive scientific workflows.…”
Section: Related Work (mentioning)
confidence: 99%
“…In both experiments, we note patterns of unnecessary spread of tasks among nodes with both engines at times. This is something to keep in mind when working with large data batches, as it is desirable to minimize data movement between nodes [40,41]. A helpful directive to this effect in Nextflow is scratch, and in Cromwell is localization_optional.…”
Section: Scalability (mentioning)
confidence: 99%
“…WaComM uses a hybrid approach, based on Eulerian-Lagrangian models, implemented using a heterogeneous parallel approach. WaComM requires high-performance computing capabilities to be able to provide near real-time results, impacting the on-premise total cost of ownership. The operational costs derived from such expensive infrastructures could be mitigated by the GPU offloader, running the CUDA applications in dedicated remote commodity GPU accelerator servers.…”
Section: Real Use Case Scenarios (mentioning)
confidence: 99%