2014
DOI: 10.14778/2735508.2735511

Staring into the Abyss: An Evaluation of Concurrency Control with One Thousand Cores

Abstract: Computer architectures are moving towards an era dominated by many-core machines with dozens or even hundreds of cores on a single chip. This unprecedented level of on-chip parallelism introduces a new dimension to scalability that current database management systems (DBMSs) were not designed for. In particular, as the number of cores increases, the problem of concurrency control becomes extremely challenging. With hundreds of threads running in parallel, the complexity of coordinating competing accesses to data…

Cited by 174 publications (18 citation statements)
References 35 publications

“…Centralized (deterministic): H-Store [13], two orders of magnitude, YCSB, multi-partition workload.
Distributed (deterministic): Calvin [18], 22×, YCSB, low-contention workload (uniform access).
Centralized (non-deterministic): Cicada [16], TicToc [25], FOEDUS [15], ERMIA [14], Silo [20], 2PL-NoWait [24], 3×, TPC-C, high-contention workload (1 warehouse).
Table 2. Experimental results using TPC-C and YCSB for the centralized implementation of the queue-oriented paradigm [17], and a distributed deterministic database.…”
Section: Macrobenchmark
mentioning
confidence: 99%
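
The excerpt above contrasts deterministic, queue-oriented designs (H-Store, Calvin) with non-deterministic ones (Silo, TicToc, and others). For readers unfamiliar with the queue-oriented idea, here is a minimal Python sketch: data is hash-partitioned, each partition's queue is drained serially by a single worker, so single-partition transactions need no locks and never abort. All names, the partitioning scheme, and the single-key transaction shape are assumptions made for illustration, not details of [17].

```python
from collections import defaultdict

NUM_PARTITIONS = 4  # illustrative partition count

def partition_of(key):
    """Hash-partition keys across single-threaded executors."""
    return hash(key) % NUM_PARTITIONS

def enqueue(txn, queues):
    """Route each transaction to its partition's queue. Assumption for
    this sketch: a transaction touches exactly one key."""
    queues[partition_of(txn["key"])].append(txn)

def drain(partition_id, queues, store):
    """One worker owns one partition and applies its queue serially, in
    order; single-partition transactions therefore need no locks and
    never abort."""
    for txn in queues[partition_id]:
        store[txn["key"]] = txn["value"]
    queues[partition_id].clear()

queues = defaultdict(list)
store = {}
enqueue({"key": "acct-1", "value": 100}, queues)
drain(partition_of("acct-1"), queues, store)
assert store["acct-1"] == 100
```
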
“…
Current main-memory database system architectures are still challenged by high-contention workloads, and this challenge will continue to grow as the number of cores in processors continues to increase [35]. These systems schedule transactions randomly across cores to maximize concurrency and to produce a uniform load across cores.
…”
mentioning
confidence: 99%
“…Our results show that, with appropriate settings, intelligent scheduling can increase throughput by 54% and reduce the abort rate by 80% on a 20-core machine, relative to random scheduling. In summary, the paper provides preliminary evidence that intelligent scheduling significantly improves DBMS performance. Transaction aborts are one of the main sources of performance loss in main-memory OLTP systems [35]. Current architectures for OLTP DBMSs use random scheduling to assign transactions to threads. Random scheduling achieves uniform load across CPU cores and keeps all cores occupied.…”
mentioning
confidence: 99%
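
The two excerpts above contrast random scheduling with "intelligent" (contention-aware) scheduling. A minimal sketch of both policies follows; the worker count matches the 20-core machine quoted above, while the Txn shape and the hot-key heuristic are assumptions for this sketch, not the cited paper's actual scheduler.

```python
import random
from collections import defaultdict, namedtuple

NUM_WORKERS = 20  # matches the 20-core machine in the excerpt

# A transaction reduced to the single key it contends on (an assumption
# made for this sketch; real schedulers use richer conflict signals).
Txn = namedtuple("Txn", ["txn_id", "hot_key"])

def random_schedule(txn, queues):
    """Baseline: pick a uniformly random worker. Load is balanced, but
    transactions touching the same record can run on different cores
    concurrently and abort each other."""
    queues[random.randrange(NUM_WORKERS)].append(txn)

def contention_aware_schedule(txn, queues):
    """Route by hash of the contended key, so conflicting transactions
    serialize on one worker's queue instead of aborting, trading some
    load balance for far fewer aborts."""
    queues[hash(txn.hot_key) % NUM_WORKERS].append(txn)

queues = defaultdict(list)
for i in range(100):
    contention_aware_schedule(Txn(i, hot_key="row-%d" % (i % 5)), queues)
```
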
“…[5][6][7][8] Most of them are based on the notion of late materialization. Yu et al. [10] presented evidence that many-core machines require a completely redesigned DBS, one that is aware of modern hardware. Thus, late-materialization joins yield smaller partial results, decreasing the number of write operations on secondary memory.…”
mentioning
confidence: 99%
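
Since the excerpt leans on late materialization without defining it, a short sketch may help: join only the key columns first, carrying row positions instead of full rows, and fetch payload columns only for the rows that actually match, which is why the partial result stays small. The columnar layout and function names below are illustrative assumptions, not code from the cited papers.

```python
def join_late(left_keys, right_keys):
    """Phase 1: join only the key columns, producing (left_pos, right_pos)
    pairs. The partial result carries row positions, not full payloads,
    so far fewer bytes are spilled to secondary memory."""
    index = {}
    for rpos, k in enumerate(right_keys):
        index.setdefault(k, []).append(rpos)
    return [(lpos, rpos)
            for lpos, k in enumerate(left_keys)
            for rpos in index.get(k, ())]

def materialize(pairs, left_payload, right_payload):
    """Phase 2: fetch payload columns only for rows that actually matched."""
    return [(left_payload[l], right_payload[r]) for l, r in pairs]

pairs = join_late([1, 2, 3, 2], [2, 2, 4])
rows = materialize(pairs, ["a", "b", "c", "d"], ["x", "y", "z"])
assert rows == [("b", "x"), ("b", "y"), ("d", "x"), ("d", "y")]
```
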
“…However, that premise only holds for a small number of CPU cores in a node. Yu et al. [10] presented evidence that many-core machines require a completely redesigned DBS, one that is aware of modern hardware. In fact, some join operators already take advantage of running on modern hardware.…”
mentioning
confidence: 99%