Proceedings of the 2004 VLDB Conference
DOI: 10.1016/b978-012088469-8.50059-0
STEPS Towards Cache-Resident Transaction Processing

Abstract: Online transaction processing (OLTP) is a multibillion dollar industry with high-end database servers employing state-of-the-art processors to maximize performance. Unfortunately, recent studies show that CPUs are far from realizing their maximum intended throughput because of delays in the processor caches. When running OLTP, instruction-related delays in the memory subsystem account for 25 to 40% of the total execution time. In contrast to data, instruction misses cannot be overlapped with out-of-order execu…
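The core idea behind STEPS is that concurrent OLTP transactions execute largely the same code, so scheduling them in batches lets each code step's instructions be fetched into the L1 i-cache once per batch rather than once per transaction. The toy simulation below is a minimal illustrative sketch (not the paper's implementation, and the cache/step parameters are invented for illustration): it counts i-cache misses under an LRU cache for conventional run-to-completion scheduling versus STEPS-style step-batched scheduling.

```python
# Hypothetical simulation (not from the paper): count L1 i-cache misses when
# NUM_TXNS transactions each run the same sequence of code "steps", under a
# tiny LRU cache that can hold only CACHE_CAPACITY steps' worth of code.

CACHE_CAPACITY = 2          # steps whose instructions fit in the i-cache at once
STEPS_PER_TXN = 5           # identical code steps in each transaction
NUM_TXNS = 10

def simulate(schedule):
    """Count misses for a sequence of (txn, step) executions under LRU."""
    cache = []              # LRU order: least recently used step id first
    misses = 0
    for _, step in schedule:
        if step in cache:
            cache.remove(step)          # hit: refresh LRU position
        else:
            misses += 1                 # miss: fetch the step's code
            if len(cache) == CACHE_CAPACITY:
                cache.pop(0)            # evict least recently used step
        cache.append(step)
    return misses

# Conventional: each transaction runs all its steps to completion,
# so the 5-step code footprint thrashes the 2-step cache every time.
sequential = [(t, s) for t in range(NUM_TXNS) for s in range(STEPS_PER_TXN)]

# STEPS-style: execute step s for every transaction in the batch, then s+1, ...
# so each step's code is fetched once and reused NUM_TXNS - 1 times.
batched = [(t, s) for s in range(STEPS_PER_TXN) for t in range(NUM_TXNS)]

print(simulate(sequential))   # 50 misses: every access misses
print(simulate(batched))      # 5 misses: one cold miss per step
```

The contrast is the point: with a code footprint larger than the i-cache, run-to-completion misses on every step, while batching by step reduces misses to one per step per batch.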

Cited by 23 publications (18 citation statements) · References 13 publications
“…About 88% of instructions, on average, can be fetched together, making a shared fetch mechanism very promising. Previous work also observed a similar phenomenon for server workloads [32] and transaction processing in database systems [33].…”
Section: Instruction Redundancy
confidence: 53%
“…Others have proposed using compiler hints to find reuse [28,29,30,31]. Inter-thread instruction sharing has also been exploited to reduce cache misses [32,33,34]. Our work can be used in conjunction with these intra-thread techniques.…”
Section: Related Work
confidence: 88%
“…But unlike the reliability examples in Section 5, both of these ideas are alternately very amenable to software support similar to, for example, Cohort Scheduling [20], SEDA [44], and STEPS [16]. All of these alternate projects use additional software complexity to create and exploit dynamic heterogeneity.…”
Section: Hardware or Software Support?
confidence: 99%
“…Hardavellas et al [9] noticed this problem and pointed out that DB systems must optimize for locality in high-level caches (such as an L1 cache). STEPS [10] improves the L1 instruction-cache performance by scheduling threads in OLTP workloads, but improving the data locality remains a problem for high cache levels. CARIC-DA focuses on the private-cache levels (L1 and L2 caches), which are closer to the processor than the LLC.…”
Section: Optimizing DBMSs on Multicore Platforms
confidence: 99%
“…These changes in cache levels indicate that it is increasingly important to bring data beyond the LLC and closer to L1. Hardavellas et al [9] proposed STEPS [10], which is a transactioncoordinating mechanism that minimizes instruction misses in the L1 cache based on the StagedDB design. However, reducing the data misses in higher cache levels is still a major challenge.…”
Section: Introduction
confidence: 99%