2014 14th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing 2014
DOI: 10.1109/ccgrid.2014.11
HyCache+: Towards Scalable High-Performance Caching Middleware for Parallel File Systems

Abstract: The ever-growing gap between computation and I/O is one of the fundamental challenges for future computing systems. This computation-I/O gap is even larger for modern large-scale high-performance systems due to their state-of-the-art yet decades-old architecture: the compute and storage resources form two cliques that are interconnected with shared networking infrastructure. This paper presents a distributed storage middleware, called HyCache+, right on the compute nodes, which allows I/O to effe…

Cited by 26 publications (12 citation statements) | References 46 publications
“…This architecture increases the likelihood of reusing the decompressed data, since the decompressed data are cached on the local node. In fact, prior work [18,19] shows that caching has a significant impact on the overall performance of distributed and parallel file systems. Because the original compressed file is split into many logical chunks, each of which can be decompressed independently, it allows a more flexible memory caching mechanism and parallel processing of these logical chunks.…”
Section: Discussion
confidence: 99%
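The chunk-based scheme quoted above can be sketched as follows. This is a hypothetical illustration, not code from either cited paper: the file is stored as independently compressed chunks (here, separate zlib blocks of an assumed fixed size), so any chunk can be decompressed in parallel and cached, and later reads of a chunk reuse the decompressed data.

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

# Hypothetical chunked container: each chunk is compressed independently,
# so any chunk can be decompressed without touching the others.
CHUNK_SIZE = 4  # tiny, for illustration only

def compress_chunks(data: bytes) -> list:
    """Split data into fixed-size chunks and compress each one separately."""
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    return [zlib.compress(c) for c in chunks]

class ChunkCache:
    """Cache decompressed chunks in the local node's memory."""
    def __init__(self, compressed_chunks):
        self._chunks = compressed_chunks
        self._cache = {}          # chunk index -> decompressed bytes

    def get(self, idx: int) -> bytes:
        if idx not in self._cache:                 # miss: decompress once
            self._cache[idx] = zlib.decompress(self._chunks[idx])
        return self._cache[idx]                    # hit: reuse decompressed data

    def read_all_parallel(self) -> bytes:
        # Independence of the chunks is what allows parallel decompression.
        with ThreadPoolExecutor() as pool:
            parts = list(pool.map(self.get, range(len(self._chunks))))
        return b"".join(parts)

data = b"hello parallel file systems"
cache = ChunkCache(compress_chunks(data))
assert cache.read_all_parallel() == data
```

A second call to `read_all_parallel` hits only the in-memory cache, which is the reuse effect the quoted discussion attributes to node-local caching.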
“…Another caching middleware is proposed by Zhao, Qiao, and Raicu [123]. They introduce a two-stage mechanism to decrease the amount of data to be transferred between processing and intermediate I/O nodes.…”
Section: Caching and Prefetching
confidence: 99%
“…This architecture removes redundant data from transmission in both backup and restore operations, improving backup and restore performance as well as the reduction ratio. Dongfang Zhao et al. [1] present a distributed storage middleware, called HyCache+, on the compute nodes, which allows I/O to exploit the high bisection bandwidth of the high-speed interconnect of parallel computing systems. HyCache+ provides a POSIX interface to end users with memory-class I/O throughput and latency, and transparently exchanges the cached data with the existing slow-speed but high-capacity network-attached storage.…”
Section: Literature Survey
confidence: 99%
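The behavior the survey attributes to HyCache+ — a small, fast cache tier on the compute node that transparently exchanges data with a large, slow backing store — can be sketched as a simple two-tier store. The class name, the LRU eviction policy, and the dict-backed "slow tier" are illustrative assumptions, not HyCache+'s actual design:

```python
from collections import OrderedDict

class TwoTierStore:
    """Sketch of a caching middleware (hypothetical, not the HyCache+ code):
    reads and writes hit a small fast tier (e.g. node-local SSD/RAM); under
    capacity pressure, least-recently-used entries are transparently swapped
    out to a large slow tier (e.g. network-attached storage)."""

    def __init__(self, fast_capacity: int):
        self.fast = OrderedDict()   # path -> data, ordered by recency
        self.slow = {}              # backing store (unbounded here)
        self.capacity = fast_capacity

    def write(self, path: str, data: bytes):
        self.fast[path] = data
        self.fast.move_to_end(path)
        self._evict()

    def read(self, path: str) -> bytes:
        if path in self.fast:                       # fast-tier hit
            self.fast.move_to_end(path)
            return self.fast[path]
        data = self.slow[path]                      # miss: fetch from slow tier
        self.write(path, data)                      # promote back into the cache
        return data

    def _evict(self):
        while len(self.fast) > self.capacity:
            path, data = self.fast.popitem(last=False)  # drop LRU entry
            self.slow[path] = data                      # swap out, don't discard

store = TwoTierStore(fast_capacity=2)
store.write("/a", b"1"); store.write("/b", b"2"); store.write("/c", b"3")
assert "/a" in store.slow          # /a was swapped to the slow tier
assert store.read("/a") == b"1"    # transparent read promotes it back
```

Callers see one namespace of paths; the POSIX-interface aspect of HyCache+ would sit in front of logic like this, with eviction driven by the paper's own heuristics rather than plain LRU.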