18th International Parallel and Distributed Processing Symposium, 2004. Proceedings.
DOI: 10.1109/ipdps.2004.1303042
Replication under scalable hashing: a family of algorithms for scalable decentralized data distribution

Cited by 69 publications (53 citation statements)
References 19 publications
“…The first methods with dedicated support for replication were proposed by Honicky and Miller [Honicky and Miller 2003] [Honicky and Miller 2004]. RUSH (Replication Under Scalable Hashing) maps replicated objects to a scalable collection of storage servers according to user-specified server weighting.…”
Section: Previous Results (mentioning)
confidence: 99%
“…The RUSH algorithms all proceed in two stages, first identifying the appropriate cluster in which to place an object, and then identifying the disk within a cluster [Honicky and Miller 2003] [Honicky and Miller 2004]. Within a cluster, replicas assigned to the cluster are mapped to disks using prime number arithmetic that guarantees that no two replicas of a single object can be mapped to the same disk.…”
Section: RUSH (mentioning)
confidence: 99%
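The prime-number arithmetic mentioned in the quoted statement can be sketched as follows. Assuming, purely as an illustration rather than the paper's exact construction, a cluster of m disks with m prime: a start disk and a stride are derived from the object's hash, and since every stride in 1..m-1 is coprime to a prime m, the first r probes of the arithmetic progression land on r distinct disks.

```python
import hashlib

def _h(key: str) -> int:
    # Deterministic 64-bit hash (assumption: any stable hash works here).
    return int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")

def replica_disks(obj_id: str, cluster_id: int, m: int, r: int) -> list:
    """Map r replicas of one object to r distinct disks in a cluster of
    m disks, where m is prime. The stride lies in 1..m-1 and is thus
    coprime to m, so start + i*stride (mod m) visits r distinct disks:
    no two replicas of the object share a disk."""
    assert r <= m, "cannot place more replicas than disks"
    start = _h(f"{obj_id}/{cluster_id}/start") % m
    stride = 1 + _h(f"{obj_id}/{cluster_id}/stride") % (m - 1)
    return [(start + i * stride) % m for i in range(r)]
```

This mirrors the two-stage structure the citation describes: a separate cluster-selection step (not shown) chooses `cluster_id`, and the intra-cluster step above guarantees distinct disks per replica.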
“…The first methods with dedicated support for replication were proposed by Honicky and Miller [14] [15]. RUSH (Replication Under Scalable Hashing) maps replicated objects to a scalable collection of storage servers according to user-specified server weighting.…”
Section: B. Previous Results (mentioning)
confidence: 99%
“…Although in our experiments we allocate data from scratch when new disks arrive, it should be pointed out that a number of algorithms have been proposed and implemented which are able to perform a reorganization with minimum overhead (please see, e.g., [19,20,21,5]). …”
Section: Dynamically Increasing the Number of Bins (mentioning)
confidence: 99%