2015
DOI: 10.1145/2872887.2750416

Architecting to achieve a billion requests per second throughput on a single key-value store server platform


Cited by 16 publications (15 citation statements)
References 26 publications
“…For this reason, we naturally assume that to execute a transaction, a client can communicate with the servers (but not with other clients) and a server communicates with a client only to respond to a client's read or write request. We find evidence of the relevance of this assumption in large-scale production systems, such as Facebook's data platform [45], and in emerging systems [23,29,37] and architectures [36] for fast query processing, where no per-client states are maintained to avoid the corresponding overheads and to achieve the lowest latency. System model.…”
Section: Model and Definitions (mentioning)
confidence: 99%
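
The model described in this excerpt — clients communicate only with servers, and a server answers each read or write request without keeping any per-client state — can be pictured with a minimal sketch. The class and method names below are hypothetical and are not taken from any of the cited systems.

```python
# Minimal sketch (hypothetical names) of the stateless request/response model
# described above: clients contact servers only, and the server retains no
# per-client session state -- every request is self-contained.

class KVServer:
    """Holds one shard of the key-value data and answers read/write requests."""

    def __init__(self):
        self.store = {}  # this server's local key-value shard

    def handle_request(self, op, key, value=None):
        # Nothing about the requesting client is remembered after the reply.
        if op == "read":
            return self.store.get(key)
        if op == "write":
            self.store[key] = value
            return "ok"
        raise ValueError(f"unknown operation: {op}")


# A client only ever talks to servers, never to other clients.
server = KVServer()
server.handle_request("write", "user:42", "alice")
print(server.handle_request("read", "user:42"))  # -> alice
```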
“…This is because a server thread of RDMA-Memcached has to coordinate with other threads to share data structures (e.g., LRU lists) as well as to perform network operations, which does not scale well [5,16,23]. By using data partitioning, a server thread in ServerReply does not need to interact with other threads, so ServerReply is limited only by outbound RDMA operations.…”
Section: Comparison On Throughput (mentioning)
confidence: 99%
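
The scalability contrast drawn in this excerpt — threads coordinating over shared data structures versus threads that each own a disjoint partition — is sketched below under assumed names; this is not the actual code of RDMA-Memcached or ServerReply.

```python
# Illustrative sketch (hypothetical names) of the two designs contrasted above:
# a shared store that every thread must lock versus a partitioned store where
# each thread exclusively owns one partition and never coordinates with others.

import threading

NUM_PARTITIONS = 4

# Shared design: one dictionary guarded by one lock, so all threads contend.
shared_store = {}
shared_lock = threading.Lock()

def shared_get(key):
    with shared_lock:          # cross-thread coordination on every access
        return shared_store.get(key)

# Partitioned design: the key hash selects a partition, and only the thread
# that owns that partition ever touches it, so no locking is needed.
partitions = [dict() for _ in range(NUM_PARTITIONS)]

def partition_of(key):
    return hash(key) % NUM_PARTITIONS

def partitioned_get(key, owner_partition):
    p = partition_of(key)
    assert p == owner_partition, "requests must be routed to the owning thread"
    return partitions[p].get(key)   # lock-free thanks to exclusive ownership

# Usage: route the key to its owning partition, then read without any lock.
partitions[partition_of("user:42")]["user:42"] = "alice"
print(partitioned_get("user:42", partition_of("user:42")))  # -> alice
```

In such a partitioned (shared-nothing) design the cost moves from lock contention to steering each request to the core that owns the key.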
“…KVS such as Memcached [2], Redis [3], Dynamo [22], TAO [11], and Voldemort [5] are used in production environments of large service providers such as Facebook, Amazon, Twitter, Zynga, and LinkedIn [6,39,46,59]. The popularity of these systems has resulted in considerable research and development efforts, including open-source implementations [1], research prototypes [7,51] and a wide range of sophisticated, highly tuned frameworks that aspire to become the state-of-the-art of KVS [23,36,37].…”
Section: In-memory Key-value Stores (mentioning)
confidence: 99%
“…α = 0.99 is the skew of the data popularity distribution typically used in KVS research [16,23,31,36,37]. Some studies show that the popularity skew in real-world datasets can be lower than that (e.g., α = 0.6 [55], α = 0.7–0.9 [8,53]), but it can also be even higher (e.g., up to α = 1.01 [15,27]).…”
Section: Skew In Scale-out Architectures (mentioning)
confidence: 99%
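
As a point of reference, the sketch below shows one way to draw keys from a finite Zipfian popularity distribution with the skew values quoted above (α = 0.99 by default); the function name and parameters are illustrative. Note that numpy.random.zipf only supports α > 1, so the finite distribution is built explicitly.

```python
# Hedged sketch (hypothetical helper) for sampling key ranks from a finite
# Zipfian popularity distribution with skew alpha, e.g. alpha = 0.99 as above.
# numpy.random.zipf requires alpha > 1, so the finite CDF is built explicitly.

import numpy as np

def zipfian_keys(num_keys, num_samples, alpha=0.99, seed=0):
    """Draw key ranks in [0, num_keys) with P(rank i) ~ 1 / (i + 1)**alpha."""
    rng = np.random.default_rng(seed)
    weights = 1.0 / np.arange(1, num_keys + 1) ** alpha
    probs = weights / weights.sum()
    return rng.choice(num_keys, size=num_samples, p=probs)

# With alpha = 0.99 a small fraction of keys attracts most requests, while
# alpha = 0.6 spreads the load far more evenly across keys.
samples = zipfian_keys(num_keys=1_000_000, num_samples=100_000, alpha=0.99)
hot_share = np.mean(samples < 10_000)  # requests hitting the top 1% of keys
print(f"top 1% of keys receive {hot_share:.0%} of requests")
```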