44th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), 2014
DOI: 10.1109/dsn.2014.41

Scalable State-Machine Replication

Cited by 47 publications (29 citation statements)
References 35 publications (55 reference statements)
“…When the initial coordinator i decides to go onto the slow path, it performs an analog of Paxos Phase 2: it sends an MConsensus message with its proposal and ballot i to a slow quorum that includes itself (line 18). Following Flexible Paxos [12], the size of the slow quorum is only f + 1, rather than a majority like in classical Paxos.…”
Section: Slow Path
confidence: 99%
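The quorum-size comparison in the excerpt above can be sketched numerically. This is a hypothetical illustration (not code from the cited paper): a classical Paxos Phase 2 quorum is a majority of the n replicas, while a Flexible Paxos-style slow quorum needs only f + 1 replicas to tolerate f failures.

```python
def majority_quorum(n: int) -> int:
    """Classical Paxos Phase 2 quorum: a majority of n replicas."""
    return n // 2 + 1

def slow_quorum(f: int) -> int:
    """Flexible Paxos-style slow quorum: f + 1 replicas tolerate f failures."""
    return f + 1

# With n = 2f + 1 replicas the two coincide (e.g. n = 3, f = 1);
# with larger n (e.g. n = 7, f = 2) the slow quorum is strictly smaller.
```

For n = 7 and f = 2, a majority quorum needs 4 replicas but the slow quorum only 3, which is where the smaller slow quorum pays off.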
“…We compared Atlas with EPaxos in detail in §3.3. There have been two follow-up protocols to EPaxos, Alvin [33] and Caesar [2].…”
Section: Related Work
confidence: 99%
“…Unfortunately, few applications can be optimally partitioned (i.e., all requests fall within a single shard and load is balanced among shards) and so the system must handle requests that span multiple shards. There are two classes of solutions when it comes to handling a multi-shard request: (a) having the involved shards coordinate and execute the request in a distributed fashion (e.g., Spanner [3], S-SMR [4]) and (b) moving the necessary state to one shard that will execute the request locally (e.g., [5]). In either case, one pitfall of sharding is that if the application state is poorly partitioned, overall system performance will most likely decrease, instead of increase, due to the overhead of multi-shard requests.…”
Section: Introduction
confidence: 99%
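The two classes of multi-shard solutions named in the excerpt above can be sketched as a routing decision. This is a hypothetical illustration (not from the cited papers); `shard_of` is a toy placement function, and the strategy names are assumptions for the sketch.

```python
NUM_SHARDS = 4  # assumed shard count

def shard_of(key: str) -> int:
    # Toy placement: first character's code point modulo the shard count.
    return ord(key[0]) % NUM_SHARDS

def route(keys, strategy="coordinate"):
    """Decide how to execute a request touching `keys`."""
    shards = {shard_of(k) for k in keys}
    if len(shards) == 1:
        return ("local", shards.pop())      # optimally partitioned case
    if strategy == "coordinate":            # (a) distributed execution
        return ("coordinate", sorted(shards))
    return ("move_state_to", min(shards))   # (b) relocate state to one shard
```

Either multi-shard branch pays a coordination or data-movement cost, which is exactly the pitfall the excerpt describes when the partitioning is poor.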
“…To provide a scalable design, clustering is the common approach. The nodes of the system are partitioned in multiple groups, each group working independently; the system can grow by simply adding more groups [13,41]. To achieve flexibility and handle churn, join and leave operations should be lightweight and entail small, localized changes to the system [53].…”
Section: Introduction
confidence: 99%
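The clustering properties in the excerpt above — independent groups, growth by adding groups, and lightweight, localized joins and leaves — can be sketched as follows. This is a toy illustration under assumed semantics, not the design of any cited system; the class and method names are hypothetical.

```python
class ClusteredSystem:
    """Toy model: nodes partitioned into independently operating groups."""

    def __init__(self):
        self.groups: list[list[str]] = []

    def add_group(self):
        # The system grows by simply adding another group.
        self.groups.append([])

    def join(self, node: str):
        # Lightweight join: only the least-loaded group is modified.
        min(self.groups, key=len).append(node)

    def leave(self, node: str):
        # Lightweight leave: only the group holding the node changes.
        for group in self.groups:
            if node in group:
                group.remove(node)
                return
```

Each membership change touches exactly one group, which is what keeps churn handling localized.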