CCGrid 2003. 3rd IEEE/ACM International Symposium on Cluster Computing and the Grid, 2003. Proceedings. 2003
DOI: 10.1109/ccgrid.2003.1199357
Discretionary caching for I/O on clusters

Abstract: I/O bottlenecks are already a problem in many large-scale applications that manipulate huge datasets. This problem is expected to get worse as applications get larger and I/O subsystem performance lags behind processor and memory speed improvements. At the same time, off-the-shelf clusters of workstations are becoming a popular platform for demanding applications due to their cost-effectiveness and widespread deployment. Caching I/O blocks is one effective way of alleviating disk latencies, and there can …

Cited by 19 publications (18 citation statements); references 33 publications.
“…Nitzberg et al. proposed collective buffering [29], and Ma et al. proposed active buffering [27] to boost the output performance of many scientific applications. Vilayannur et al. proposed discretionary caching for parallel I/O, which uses compile-time and runtime support to bypass caching when caching hurts performance [43]. Eshel et al. designed a cluster file system cache named Panache that exploits parallelism in many aspects of its design and has proven effective and scalable as a parallel file system cache [12].…”
Section: B. Parallel I/O Caching and Prefetching
confidence: 99%
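The excerpt above summarizes the paper's core idea: let compile-time and runtime analysis decide, per file, whether caching helps, and bypass the cache otherwise. A minimal sketch of that discretionary decision follows; the class, the method names, and the hint mechanism are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of discretionary caching: a per-file "cacheable" hint,
# which in the paper would come from compile-time/runtime analysis of the
# access pattern, decides whether a block enters the cache or bypasses it.
# All names here are hypothetical, for illustration only.

from collections import OrderedDict

class DiscretionaryCache:
    def __init__(self, capacity, read_from_disk):
        self.capacity = capacity
        self.blocks = OrderedDict()           # (file, block) -> data, LRU order
        self.read_from_disk = read_from_disk  # fallback I/O routine
        self.hints = {}                       # file_id -> cache this file's blocks?

    def set_hint(self, file_id, cacheable):
        """Record the analysis verdict: cache this file's blocks or not."""
        self.hints[file_id] = cacheable

    def read(self, file_id, block_id):
        key = (file_id, block_id)
        if key in self.blocks:                # hit: refresh LRU position
            self.blocks.move_to_end(key)
            return self.blocks[key]
        data = self.read_from_disk(file_id, block_id)
        if self.hints.get(file_id, True):     # bypass if hinted uncacheable
            if len(self.blocks) >= self.capacity:
                self.blocks.popitem(last=False)  # evict least-recently-used
            self.blocks[key] = data
        return data
```

For example, a file that is scanned once sequentially would be hinted uncacheable, so its blocks never displace blocks with genuine reuse.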
“…Since multiple clients share the same memory cache, its efficient utilization is clearly very critical. Since global caches have already been studied in the context of PVFS and this is not one of the contributions of this paper, we do not elaborate on our global cache implementation any further, except to say that it closely follows the implementation presented in [33]. Our global cache management method employs an LRU (least-recently-used) policy with an aging method to determine the best candidate for replacement as a result of a cache miss.…”
Section: Experimental Platform and Benchmarks
confidence: 99%
“…Since multiple CPUs (computation nodes) can share the same memory cache, its efficient utilization is clearly critical. Since global caches have already been studied in the context of PVFS and this is not one of the contributions of this paper, we do not elaborate on our PVFS-based global cache implementation any further, except to say that it closely follows the implementation presented in [23]. Our global cache management method employs an LRU (least-recently-used) policy with an aging method to determine the best candidate for replacement as a result of a cache miss.…”
Section: Methods
confidence: 99%
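Both excerpts above describe the same global cache policy: LRU augmented with an aging method. Below is a minimal sketch of one common LRU-with-aging variant; the 8-bit age register and the explicit tick() step are assumptions about the general technique, not details taken from the cited implementation.

```python
# Minimal sketch of LRU with aging: each cached block keeps an 8-bit age
# register; on a periodic aging tick the register is shifted right and the
# block's referenced bit is ORed into the top bit; on a miss, the block
# with the smallest age value is evicted. Register width and tick policy
# are illustrative assumptions.

class AgingLRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}        # block_id -> block data
        self.age = {}         # block_id -> 8-bit age register
        self.referenced = {}  # block_id -> referenced since last tick?

    def tick(self):
        """Aging step: decay every register, fold in the referenced bit."""
        for b in self.age:
            self.age[b] = (self.age[b] >> 1) | (0x80 if self.referenced[b] else 0)
            self.referenced[b] = False

    def access(self, block_id, load):
        if block_id in self.data:            # hit: mark as referenced
            self.referenced[block_id] = True
            return self.data[block_id]
        if len(self.data) >= self.capacity:  # miss: evict the "oldest" block
            victim = min(self.age, key=self.age.get)
            for d in (self.data, self.age, self.referenced):
                del d[victim]
        self.data[block_id] = load(block_id)
        self.age[block_id] = 0x80            # start out recently used
        self.referenced[block_id] = False
        return self.data[block_id]
```

Compared with plain LRU, the aging registers distinguish a block touched once long ago from one touched repeatedly, which helps under the bursty access patterns typical of shared global caches.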
“…Targeting multi-level caches, several buffer cache management policies have been proposed [43,40,23,14]. [40] introduced a DEMOTE operation, in which an evicted cache block is migrated to a lower level of the buffer cache.…”
Section: Related Work
confidence: 99%
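As a rough illustration of the DEMOTE idea credited to [40] above: when an upper cache level evicts a block, it hands the block to the level below instead of discarding it. The two-level structure and names in this sketch are assumptions, and exclusivity handling (removing a block from the lower level when it is promoted back up) is omitted for brevity.

```python
# Minimal sketch of DEMOTE in a multi-level buffer cache: on eviction,
# the victim block is pushed ("demoted") to the next level down rather
# than dropped. Both levels use plain LRU here; the real policy and the
# client/server transport are more involved.

from collections import OrderedDict

class CacheLevel:
    def __init__(self, capacity, lower=None):
        self.capacity = capacity
        self.blocks = OrderedDict()  # block_id -> data, LRU order
        self.lower = lower           # next cache level, or None (disk)

    def insert(self, block_id, data):
        if len(self.blocks) >= self.capacity:
            victim, vdata = self.blocks.popitem(last=False)  # evict LRU
            if self.lower is not None:
                self.lower.insert(victim, vdata)             # DEMOTE, don't drop
        self.blocks[block_id] = data

    def lookup(self, block_id):
        if block_id in self.blocks:          # hit at this level
            self.blocks.move_to_end(block_id)
            return self.blocks[block_id]
        if self.lower is not None:
            return self.lower.lookup(block_id)  # miss falls through
        return None                             # would go to disk
```

The payoff is that a block evicted from a client's cache can still be served from the server's cache on its next reference, instead of forcing a disk access.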