ACM/IEEE SC 2006 Conference (SC'06)
DOI: 10.1109/sc.2006.19

CRUSH: Controlled, Scalable, Decentralized Placement of Replicated Data

Abstract: Emerging large-scale distributed storage systems are faced with the task of distributing petabytes of data among tens or hundreds of thousands of storage devices. Such systems must evenly distribute data and workload to efficiently utilize available resources and maximize system performance, while facilitating system growth and managing hardware failures. We have developed CRUSH, a scalable pseudorandom data distribution function designed for distributed object-based storage systems that efficiently maps data …
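
The abstract describes CRUSH as a pseudorandom, weight-aware mapping from data to devices. As a rough illustration of that idea, here is a minimal Python sketch in the spirit of CRUSH's straw-style buckets: every device draws a pseudorandom "straw" from a hash of the object and device names, scaled by the device's weight, and the longest straw wins. All device names and weights below are invented for illustration; this is a sketch of the selection idea, not the paper's full algorithm, which composes such selections over a weighted storage hierarchy.

    import hashlib
    import math

    def straw2_select(object_id: str, devices: dict[str, float]) -> str:
        """Pick one device for an object, weighted pseudorandomly.

        Each device draws a "straw" that depends only on
        hash(object, device) and the device's weight; the longest
        straw wins.  Draws are independent per device, which is what
        keeps data movement small when devices come and go.
        """
        best_device, best_draw = "", -math.inf
        for device, weight in devices.items():
            digest = hashlib.sha256(f"{object_id}:{device}".encode()).digest()
            u = (int.from_bytes(digest[:8], "big") + 1) / 2**64  # uniform in (0, 1]
            draw = math.log(u) / weight  # higher weight => draw closer to 0
            if draw > best_draw:
                best_device, best_draw = device, draw
        return best_device

    # Illustrative cluster: osd.2 has double weight, so it receives
    # roughly twice as many objects as the others.
    devices = {"osd.0": 1.0, "osd.1": 1.0, "osd.2": 2.0}
    print(straw2_select("obj-42", devices))
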

Cited by 246 publications (161 citation statements)
References 17 publications

“…In order to avoid it, Ceph has a highly reliable, centralized metadata server cluster and adopts Dynamic Subtree Partitioning [36], but provisioning many highly reliable servers is costly. While Content Espresso stores chunks by striping them across Chunk Servers, Ceph utilizes CRUSH [37], which reduces object data movement and achieves scalability when new OSDs are appended. When new Chunk Servers are appended in Content Espresso, the stored chunks need not be moved to them, because Content Espresso distributes chunks evenly across all of the Chunk Servers.…”
Section: Discussion and Related Work
confidence: 99%
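
The low data movement credited to CRUSH above is a direct consequence of independent per-device draws. A hedged sketch, reusing the straw-style selection from the earlier block (device and object names are again made up), counts how many objects relocate when one OSD is appended:

    import hashlib
    import math

    def select(obj: str, devices: dict[str, float]) -> str:
        """Straw-style weighted hash selection (see the sketch above)."""
        def draw(dev: str) -> float:
            digest = hashlib.sha256(f"{obj}:{dev}".encode()).digest()
            u = (int.from_bytes(digest[:8], "big") + 1) / 2**64
            return math.log(u) / devices[dev]
        return max(devices, key=draw)

    before = {f"osd.{i}": 1.0 for i in range(10)}
    after = {**before, "osd.10": 1.0}  # append one new OSD

    objects = [f"obj-{i}" for i in range(100_000)]
    moved = sum(select(o, before) != select(o, after) for o in objects)
    # Only objects whose longest straw now lands on osd.10 move:
    # about 1/11 of them (~9%), the minimum needed to rebalance.
    print(f"moved {moved / len(objects):.1%} of objects")
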
“…In Ceph, the CRUSH Map [4] describes the layout of the storage cluster. The CRUSH ruleset is a method to select which disks make up a placement group.…”
Section: CRUSH Map and Ruleset
confidence: 99%
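
To make the map/ruleset split mentioned above concrete, here is a hedged Python sketch: a toy two-level map (hosts containing OSDs, with all names and weights invented) and a rule analogous to a Ceph rule step like "chooseleaf ... type host", which places each replica of a placement group on a different host. The real CRUSH map language and rule engine are richer, with multiple bucket types, per-replica hashing, and retries around failed devices.

    import hashlib
    import math

    # Toy CRUSH-like map: a root containing hosts, each holding OSDs.
    CRUSH_MAP = {
        "host-a": {"osd.0": 1.0, "osd.1": 1.0},
        "host-b": {"osd.2": 1.0, "osd.3": 1.0},
        "host-c": {"osd.4": 1.0, "osd.5": 1.0},
    }

    def draw(key: str, item: str, weight: float) -> float:
        digest = hashlib.sha256(f"{key}:{item}".encode()).digest()
        u = (int.from_bytes(digest[:8], "big") + 1) / 2**64
        return math.log(u) / weight

    def choose(key: str, items: dict[str, float], n: int) -> list[str]:
        """Pick n distinct items, longest straws first."""
        return sorted(items, key=lambda i: draw(key, i, items[i]), reverse=True)[:n]

    def place_pg(pg: str, replicas: int = 3) -> list[str]:
        """Toy analogue of 'choose hosts, then one leaf OSD per host':
        no two replicas of a placement group share a host."""
        host_weights = {h: sum(osds.values()) for h, osds in CRUSH_MAP.items()}
        hosts = choose(pg, host_weights, replicas)
        return [choose(pg, CRUSH_MAP[h], 1)[0] for h in hosts]

    print(place_pg("pg.1.2a"))  # three OSDs on three distinct hosts
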
“…Clients interact with the metadata server to perform operations, such as open and rename, while communicating directly with the OSDs for I/O operations. The algorithm that is used to spread the data over the available OSDs is called CRUSH [18]. From a high level, Ceph clients and metadata servers view the object storage cluster, which consists of possibly tens or hundreds of thousands of OSDs, as a single logical object store and namespace.…”
Section: Motivation
confidence: 99%
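
The practical consequence of the passage above is that placement is computed rather than looked up: any client can hash an object name to a placement group and run that PG through CRUSH locally, then talk to the resulting OSDs directly. A minimal sketch of the object-to-PG step (PG_NUM and the object name are illustrative; Ceph's actual mapping also involves pool ids and PG splitting):

    import hashlib

    PG_NUM = 128  # illustrative placement-group count for one pool

    def object_to_pg(object_name: str) -> int:
        """Hash an object name onto a placement group.  Every client
        computes the same answer locally, so no directory lookup sits
        on the data path."""
        digest = hashlib.sha256(object_name.encode()).digest()
        return int.from_bytes(digest[:4], "big") % PG_NUM

    # The PG id would then be fed through CRUSH (see the sketches
    # above) to obtain the list of OSDs holding its replicas.
    print(object_to_pg("mydir/myfile.part0"))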