2009
DOI: 10.1016/j.future.2008.09.006

A new paradigm: Data-aware scheduling in grid computing

Cited by 117 publications (54 citation statements)
References 7 publications

“…The algorithm was able to adjust resource allocation dynamically, based on updated information about the actual task executions. The limitations of traditional CPU-oriented batch schedulers in providing efficient and reliable data access were discussed in [21]. To improve performance, the authors proposed the Stork data placement scheduler for data-intensive computing.…”
Section: B. Study on Performance Tools
Citation type: mentioning
Confidence: 99%
“…STORK (Kosar & Livny 2004; Kosar & Balman 2009) engages with CONDOR (Thain et al. 2005) to provide a new data placement scheduler for bulk data transfer jobs that are fault-tolerant batch processes that can be queued, scheduled, monitored, managed and check-pointed. STORK supports transfers between local file systems, GRIDFTP, FTP, HTTP, SRB, NeST and SRM.…”
Section: Existing Technologies and Services
Citation type: mentioning
Confidence: 99%
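
The statement above describes Stork's central idea: data transfers are treated as first-class, fault-tolerant jobs that can be queued, scheduled, monitored and retried independently of compute jobs. Below is a minimal Python sketch of that idea, assuming hypothetical names (TransferJob, run_transfer, scheduler_loop) and handling only file:// URLs; it does not reproduce Stork's actual submit format or API.

# Minimal sketch of a data placement job queue in the spirit of Stork.
# Transfers are first-class jobs, queued and retried independently of
# compute jobs. Names (TransferJob, run_transfer, scheduler_loop) are
# illustrative assumptions, not Stork's real API or submit format.
import queue
import shutil
import time
from dataclasses import dataclass
from urllib.parse import urlparse


@dataclass
class TransferJob:
    src_url: str           # e.g. "file:///tmp/in.dat"; gsiftp://, http://, srm:// would need real clients
    dest_url: str          # e.g. "file:///tmp/out.dat"
    max_retries: int = 3   # fault tolerance: retry a failed transfer


def run_transfer(job: TransferJob) -> None:
    # One transfer attempt. Only file:// is handled in this sketch; a real
    # data placement scheduler would dispatch to GridFTP, HTTP, SRM, etc.
    src, dest = urlparse(job.src_url), urlparse(job.dest_url)
    if src.scheme != "file" or dest.scheme != "file":
        raise NotImplementedError("only file:// transfers in this sketch")
    shutil.copyfile(src.path, dest.path)


def scheduler_loop(jobs: "queue.Queue[TransferJob]") -> None:
    # Drain the queue, retrying each job up to its retry limit.
    while not jobs.empty():
        job = jobs.get()
        for attempt in range(1, job.max_retries + 1):
            try:
                run_transfer(job)
                print(f"done: {job.src_url} -> {job.dest_url}")
                break
            except Exception as exc:
                print(f"attempt {attempt} failed: {exc}")
                time.sleep(1)  # back off before retrying


if __name__ == "__main__":
    with open("/tmp/in.dat", "w") as f:   # create a small source file for the demo
        f.write("example payload\n")
    q: "queue.Queue[TransferJob]" = queue.Queue()
    q.put(TransferJob("file:///tmp/in.dat", "file:///tmp/out.dat"))
    scheduler_loop(q)

The retry loop stands in for the fault tolerance the quote attributes to Stork; in practice, job state would be persisted so that transfers survive scheduler restarts.
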
“…Nevertheless, as stated by Kosar & Balman (2009), these applications and their successors represent initial steps towards data-intensive computing and data-aware batch scheduling. This is a newly emerging paradigm in which many challenges remain, including the emergence of standards for describing data transfer/copying activities.…”
Section: Existing Technologies and Services
Citation type: mentioning
Confidence: 99%
“…in parameter sweep applications [2,3], the scheduling problems in Computational Grids (CGs) and in Data Grids (DGs) are dealt with separately. Much of the current effort is focused on scheduling workloads in a data center, or on scheduling data movement and placement [42] for efficient resource/storage utilization or energy-efficient scheduling in large-scale data centers [41], [8], [33], [18], [48], [51], [57], [7], [10], [16], [17]. A recent example is GridBatch [44] for large-scale data-intensive problems on cloud infrastructures.…”
Section: Introduction
Citation type: mentioning
Confidence: 99%
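
To complement the quoted discussion, here is a minimal sketch of what "data-aware" placement means in practice: pick the execution site that already holds most of a task's input so that the least data has to be moved. The cost model and names (Site, Task, choose_site) are hypothetical assumptions and do not reproduce any specific algorithm from the cited works.

# Conceptual sketch of data-aware site selection: prefer the execution
# site that already stores most of a task's input data, so less data has
# to be moved over the network. All names are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Dict, List, Set


@dataclass
class Site:
    name: str
    resident_files: Set[str] = field(default_factory=set)  # files already stored at this site


@dataclass
class Task:
    name: str
    inputs: Dict[str, int]  # input file name -> size in bytes


def bytes_to_move(task: Task, site: Site) -> int:
    # Total input bytes that would have to be transferred to this site.
    return sum(size for fname, size in task.inputs.items()
               if fname not in site.resident_files)


def choose_site(task: Task, sites: List[Site]) -> Site:
    # Data-aware placement: minimize the data that must be staged in.
    return min(sites, key=lambda s: bytes_to_move(task, s))


if __name__ == "__main__":
    sites = [Site("A", {"x.dat"}), Site("B", {"x.dat", "y.dat"})]
    task = Task("analyze", {"x.dat": 10**9, "y.dat": 5 * 10**8})
    best = choose_site(task, sites)
    print(f"run {task.name} at site {best.name}, staging {bytes_to_move(task, best)} bytes")

A purely CPU-oriented scheduler would ignore resident_files entirely; making that term part of the placement decision is the essence of the data-aware paradigm discussed above.
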