2017
DOI: 10.1145/3154292

Corrigendum to “The IX Operating System: Combining Low Latency, High Throughput and Efficiency in a Protected Dataplane”

Cited by 24 publications (46 citation statements)
References 0 publications
“…This focus is done at the expense of a description of some more advanced research concepts. For example, the text does not discuss recursive virtual machines [33,158], the use of virtualization hardware for purposes other than running traditional virtual machines [24,29,31,43,88], or the emerging question of architectural support for containers such as Docker [129].…”
Section: Organization of This Book (mentioning)
confidence: 99%
“…In 2005, he co-founded Nuova Systems, a hardware company premised on providing architectural support for virtualization in the network and the I/O subsystem, which became the core of Cisco's Data Center strategy. More recently, having returned to academia as a professor at École polytechnique fédérale de Lausanne (EPFL), Edouard is now involved in the IX project [30,31,147] which leverages virtualization hardware and the Dune framework [29] to build specialized operating systems.…”
Section: Authors' Perspectives (mentioning)
confidence: 99%
“…The former can efficiently schedule the resources of a multi-core server and prioritize latency-sensitive tasks [8] but suffers from high overheads for µs-scale tasks. The latter improves throughput substantially (by up to 6× for key-value stores [5]) through sweeping simplifications such as separation of control from the dataplane execution, polling, run-to-completion, and synchronization-free, flow-consistent mapping of requests to cores [5,26,27,39,42,51].…”
Section: Introduction (mentioning)
confidence: 99%
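
As a rough illustration of the dataplane model the excerpt above describes, the sketch below shows flow-consistent steering and a run-to-completion handler in C. It is a hypothetical, self-contained toy, not code from IX or the cited papers; the request structure, core_for_flow(), and handle_request() are invented for illustration.

```c
/* Hypothetical sketch of the dataplane simplifications described above:
 * requests are steered to cores by hashing their flow identifier
 * (synchronization-free, flow-consistent mapping), and each core runs
 * every request to completion before taking the next one. */
#include <stdio.h>
#include <stdint.h>

#define NUM_CORES 4
#define NUM_REQS  8

struct request {
    uint32_t flow_id;   /* e.g., derived from the connection 5-tuple */
    int      work;      /* abstract amount of work carried by the request */
};

/* Flow-consistent mapping: a given flow always lands on the same core,
 * so per-flow state needs no locks. */
static int core_for_flow(uint32_t flow_id) {
    return (int)(flow_id % NUM_CORES);
}

/* Run-to-completion: the handler finishes the whole request before the
 * core returns to polling; nothing preempts it. */
static void handle_request(int core, const struct request *req) {
    printf("core %d: flow %u, work %d (run to completion)\n",
           core, (unsigned)req->flow_id, req->work);
}

int main(void) {
    struct request reqs[NUM_REQS] = {
        {101, 1}, {202, 3}, {101, 2}, {303, 1},
        {404, 5}, {202, 1}, {303, 2}, {101, 1},
    };

    /* In a real dataplane each core busy-polls its own RX queue; here we
     * dispatch sequentially to keep the sketch self-contained. */
    for (int i = 0; i < NUM_REQS; i++) {
        int core = core_for_flow(reqs[i].flow_id);
        handle_request(core, &reqs[i]);
    }
    return 0;
}
```

Note that requests from the same flow (e.g., flow 101) always map to the same core, which is what makes the per-core request path synchronization-free.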
“…These sweeping simplifications lead to two related forms of inefficiencies: (a) the dataplane is not a work conserving scheduler, i.e., a core may be idle while there are pending requests; and (b) the dataplane suffers from head-of-line blocking, i.e., a request may be blocked until the previous tasks complete execution. While these limitations might be acceptable to workloads with near-deterministic task execution time and relatively loose SLO (e.g., some widely-studied memcached workloads [1,43] with an SLO at > 100× the mean service time [5]), such assumptions break down when considering more complex workloads, e.g., in-memory transaction processing with a TPC-C-like mix of requests or with more aggressive SLO targets.…”
Section: Introduction (mentioning)
confidence: 99%
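
The two inefficiencies named in the excerpt above can be seen with a tiny back-of-the-envelope simulation. The sketch below is hypothetical and not taken from the cited papers: one long request queued ahead of three 1 µs requests on a single run-to-completion core delays all of them, even though a second core is idle the whole time.

```c
/* Hypothetical illustration of head-of-line blocking and the lack of
 * work conservation in a per-core, run-to-completion dataplane. */
#include <stdio.h>

#define QLEN 4

int main(void) {
    /* Service times (microseconds) queued on core 0; core 1 has no work. */
    int core0_queue[QLEN] = {500, 1, 1, 1};  /* one long request, then short ones */
    int t = 0;

    for (int i = 0; i < QLEN; i++) {
        t += core0_queue[i];                 /* requests run to completion, in order */
        printf("request %d completes at t = %d us (core 1 stays idle)\n", i, t);
    }
    /* The three 1 us requests finish at ~501-503 us instead of ~1-3 us, which
     * a work-conserving scheduler could have achieved by moving them to the
     * idle core. */
    return 0;
}
```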