2017
DOI: 10.1007/s11390-017-1680-8
A Lookahead Read Cache: Improving Read Performance for Deduplication Backup Storage

Cited by 16 publications (7 citation statements) · References 18 publications
“…Pre-fetching and look-ahead schemes: In order to provide efficient caching for deduplication-enabled data, several prior works that adopt look-ahead and pre-fetching schemes were implemented [20], [21], [22], [24], [48]. However, these approaches are limited to localized (single-node) deduplication platforms.…”
Section: Related Work (mentioning)
confidence: 99%
“…A lot of prior works have employed optimized pre-fetching and look-ahead caching schemes [20], [21], [22], [24], which amortize the read I/O penalty in deduplicated storage. However, these works are optimized for local-node deduplication and not for the cluster scale.…”
Section: Introduction (mentioning)
confidence: 99%
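The look-ahead idea these citing works describe can be sketched as a toy model (the function name, the LRU policy, and the prefetch rule are assumptions for illustration, not the paper's actual algorithm): during restore, a window of the backup recipe ahead of the current chunk is scanned, and that knowledge is used both to prefetch containers into free cache slots and to protect soon-needed containers from eviction.

```python
from collections import OrderedDict

def restore_with_lookahead(recipe, chunk_to_container, cache_size=4, window=8):
    """Count container reads during a restore that uses a look-ahead cache.

    recipe: ordered list of chunk fingerprints for the backup stream.
    chunk_to_container: maps each fingerprint to the container holding it.
    """
    cache = OrderedDict()              # container id -> loaded (LRU order)
    container_reads = 0

    def load(cid):
        nonlocal container_reads
        if cid in cache:
            cache.move_to_end(cid)     # refresh: most recently useful
        else:
            container_reads += 1
            cache[cid] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)   # evict least recently useful

    for i, fp in enumerate(recipe):
        # Containers referenced in the look-ahead window, nearest first.
        upcoming = [chunk_to_container[f] for f in recipe[i:i + window]]
        load(upcoming[0])              # the chunk being restored right now
        for cid in upcoming[1:]:
            if cid in cache:
                cache.move_to_end(cid)  # protect a soon-needed container
            elif len(cache) < cache_size:
                container_reads += 1    # speculative prefetch into a free slot
                cache[cid] = True
    return container_reads
```

On the stream A B C A C A with a 2-container cache, plain LRU (window of 1) pays 4 container reads, while the look-ahead variant pays 3, because knowing that A recurs keeps its container resident.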
“…Unlike the deduplication phase, which searches the metadata (i.e., fingerprint indexes), the restore phase accesses the real data of the chunks according to the recipe. The restore performance suffers from the chunk fragmentation problem [8,9,13,16,20,30], i.e., the chunks of the same data stream are scattered across various containers, causing frequent disk accesses during recovery. The chunk fragmentation problem arises during the deduplication phase.…”
Section: Chunk Fragmentation Problem (mentioning)
confidence: 99%
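The fragmentation effect described above can be made concrete with a minimal sketch (container size, chunk names, and the switch-counting metric are all illustrative assumptions): unique chunks of each backup are appended to the newest container, so a later backup whose duplicate chunks live in older containers gets a recipe that ping-pongs between containers, inflating the number of container reads at restore time.

```python
CONTAINER_SIZE = 4  # chunks per container (illustrative)

def deduplicate(stream, index, containers):
    """Write the unique chunks of `stream` into containers; return its recipe.

    index: fingerprint -> container id of the container holding the chunk.
    The recipe records, per chunk, which container must be read at restore.
    """
    recipe = []
    for chunk in stream:
        if chunk not in index:
            # New chunk: append to the newest container, opening one if full.
            if not containers or len(containers[-1]) == CONTAINER_SIZE:
                containers.append([])
            containers[-1].append(chunk)
            index[chunk] = len(containers) - 1
        recipe.append(index[chunk])   # duplicates point at old containers
    return recipe

def container_switches(recipe):
    """Container reads needed if every change of container forces a new read."""
    return sum(1 for a, b in zip(recipe, recipe[1:]) if a != b) + 1

index, containers = {}, []
backup1 = ["a", "b", "c", "d", "e", "f", "g", "h"]
r1 = deduplicate(backup1, index, containers)
# The second backup shares half of its chunks with the first.
backup2 = ["a", "x", "c", "y", "e", "z", "g", "w"]
r2 = deduplicate(backup2, index, containers)
print(container_switches(r1), container_switches(r2))  # prints "2 8"
```

The first backup restores with 2 container reads (its chunks are laid out sequentially), while the second needs 8 even though it is the same length: its duplicate chunks sit in the old containers and its new chunks in a fresh one, so the recipe alternates between them on every chunk.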
“…Data grow exponentially in widely used applications, such as IoT embeddings, artificial intelligence, and cloud computing, which require efficient and large-scale storage capacities [7,17,18]. To save space and improve storage efficiency, data deduplication [31,41] has become an efficient middleware for eliminating duplicate data, and has been widely used in current storage systems [11, 24-26, 32, 39], especially in backup storage systems [14,19,30].…”
Section: Introduction (mentioning)
confidence: 99%
“…Usually, the dimension of the data is unified by the translation to standard deviation transformation and the shift to range transformation [37][38][39][40][41], and the fuzzy matrix in fuzzy clustering is obtained. If the degree of similarity which is the coefficient of similarity [42][43][44][45][46][47][48].…”
Section: Fuzzy Cluster Analysis Model (mentioning)
confidence: 99%