Proceedings of the 2006 ACM/IEEE Conference on Supercomputing (SC '06)
DOI: 10.1145/1188455.1188582
Grid resource management---CRUSH: controlled, scalable, decentralized placement of replicated data

Cited by 134 publications (19 citation statements). References 16 publications.
“…Within a pool, the objects are sharded among aggregation units called placement groups (PGs). Depending on the replication factor, PGs are mapped to multiple OSDs using CRUSH, a pseudo-random data distribution algorithm [104]. Clients also use CRUSH to determine the OSD that should contain a given object, obviating the need for a centralized metadata service.…”
Section: RADOS Scales To Thousands Of Object Storage (citation type: mentioning)
confidence: 99%
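
To make the mechanism this statement describes concrete, here is a minimal sketch of the two-step placement scheme: objects are hashed into a fixed number of placement groups (PGs), and each PG is deterministically mapped to a set of OSDs. Rendezvous (highest-random-weight) hashing stands in here for CRUSH's actual hierarchical bucket selection, and the names `pg_num`, `osds`, and `replicas` are illustrative, not Ceph's API.

```python
# Sketch, not Ceph's implementation: SHA-256 plus rendezvous hashing stands in
# for CRUSH's pseudo-random draw over a storage hierarchy.
import hashlib


def object_to_pg(obj_name: str, pg_num: int) -> int:
    """Shard an object into one of pg_num placement groups."""
    digest = hashlib.sha256(obj_name.encode()).digest()
    return int.from_bytes(digest[:8], "big") % pg_num


def pg_to_osds(pg: int, osds: list[str], replicas: int) -> list[str]:
    """Deterministically map a PG to `replicas` distinct OSDs.

    Every party holding the same OSD list computes the same answer,
    which is why no centralized metadata service is needed.
    """
    def weight(osd: str) -> int:
        h = hashlib.sha256(f"{pg}:{osd}".encode()).digest()
        return int.from_bytes(h[:8], "big")

    return sorted(osds, key=weight, reverse=True)[:replicas]


if __name__ == "__main__":
    osds = [f"osd.{i}" for i in range(8)]
    pg = object_to_pg("rbd_data.1234", pg_num=128)
    print(pg, pg_to_osds(pg, osds, replicas=3))
```

Because both steps are pure functions of the object name and the OSD list, a client and a server independently arrive at the same placement, the property the citation statement highlights.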
“…Within a pool, the objects are sharded among aggregation units called placement groups (PGs). Depending on the replication factor, PGs are mapped to multiple OSDs using CRUSH, a pseudo-random data distribution algorithm [99]. Clients also use CRUSH to determine the OSD that should contain a given object, obviating the need for a centralized metadata service.…”
Section: Objects (citation type: mentioning)
confidence: 99%
“…Second, the object gateway poses a network bottleneck for clients writing to the Ceph cluster. One of the principal advantages of Ceph is that clients can write directly to OSDs by calculating where to store and retrieve data deterministically, via the CRUSH algorithm [20] and the cluster map, without being burdened by the network bottleneck of first connecting to a metadata server. By interposing itself between clients and OSDs, the Ceph Object Gateway re-introduces a network bottleneck, inhibiting performance.…”
Section: Design (citation type: mentioning)
confidence: 99%
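
The direct-write path this statement contrasts with the object gateway can be sketched as follows: the client derives the target OSD purely from the object name and a locally cached cluster map, then talks to that OSD itself. The `ClusterMap` shape and the `put` stub are assumptions for illustration only, not librados' actual interface.

```python
# Sketch, assuming a published cluster map of pg_num and OSD addresses;
# the hashing mirrors the simplified placement sketch above.
import hashlib
from dataclasses import dataclass


@dataclass
class ClusterMap:
    pg_num: int
    osds: list[str]  # OSD addresses, as a monitor would publish them


def locate(obj_name: str, cmap: ClusterMap) -> str:
    """Compute the primary OSD for obj_name with no metadata-server round trip."""
    digest = hashlib.sha256(obj_name.encode()).digest()
    pg = int.from_bytes(digest[:8], "big") % cmap.pg_num

    def weight(osd: str) -> int:
        h = hashlib.sha256(f"{pg}:{osd}".encode()).digest()
        return int.from_bytes(h[:8], "big")

    return max(cmap.osds, key=weight)


def put(obj_name: str, data: bytes, cmap: ClusterMap) -> None:
    osd = locate(obj_name, cmap)
    # The client would stream directly to `osd` here. A gateway-mediated
    # design instead funnels every write through one middle tier, which is
    # the network bottleneck the cited passage describes.
    print(f"writing {len(data)} bytes of {obj_name!r} directly to {osd}")


put("bucket/key.png", b"...", ClusterMap(pg_num=64, osds=[f"osd.{i}" for i in range(6)]))
```
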