2013 IEEE 29th International Conference on Data Engineering (ICDE)
DOI: 10.1109/icde.2013.6544810

CPU and cache efficient management of memory-resident databases

Abstract: Memory-Resident Database Management Systems (MRDBMS) have to be optimized for two resources: CPU cycles and memory bandwidth. To optimize for bandwidth in mixed OLTP/OLAP scenarios, the hybrid or Partially Decomposed Storage Model (PDSM) has been proposed. However, in current implementations, the bandwidth savings achieved by partial decomposition come at increased CPU costs. To achieve the aspired bandwidth savings without sacrificing CPU efficiency, we combine partially decomposed storage with Just-in-Time compilation…
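The abstract's central idea, partially decomposed storage, can be illustrated with a minimal sketch. The C++ fragment below is only a schematic illustration of the PDSM idea (attributes that are accessed together are stored together in narrow column groups, rather than in full rows or fully decomposed columns); the table, struct, and field names are invented for illustration and are not taken from the paper's implementation.

```cpp
#include <cstdint>
#include <vector>

// Minimal sketch of a partially decomposed ("hybrid") storage layout.
// All names here are illustrative assumptions, not the paper's code.

// Column group 1: attributes touched together by OLTP-style lookups.
struct OrderKeys {
    std::uint64_t order_id;
    std::uint64_t customer_id;
};

// Column group 2: attributes scanned together by OLAP-style aggregations.
struct OrderMeasures {
    double price;
    double discount;
};

struct OrdersTable {
    // One vector per column group; logical row i is split across both.
    std::vector<OrderKeys>     keys;      // narrow group, point accesses
    std::vector<OrderMeasures> measures;  // narrow group, sequential scans

    void append(std::uint64_t oid, std::uint64_t cid,
                double price, double disc) {
        keys.push_back({oid, cid});
        measures.push_back({price, disc});
    }

    // An OLAP-style scan reads only the "measures" group, so the fetched
    // cache lines carry no unused key attributes (the bandwidth saving).
    double revenue() const {
        double sum = 0.0;
        for (const auto& m : measures) sum += m.price * (1.0 - m.discount);
        return sum;
    }
};
```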

Cited by 23 publications (17 citation statements)
References 23 publications
“…Particularly, all previous studies assign homogeneous workloads to the CPU and the GPU. From these observations, the GPU is severely degraded by the memory stalls, which are usually a major performance factor for databases [27,19]. Homogeneous workload distribution still causes excessive memory stalls on the GPU, despite the fine-grained and collaborative improvements in the previous studies [19,38,21,7].…”
Section: Motivations (mentioning)
confidence: 99%
“…Furthermore, the generic cost model introduced by Manegold et al [13] allows us to model the cache accesses for other relational operators such as joins or sorts by combining atomic access patterns. Figure 2 shows L3 accesses for an increasing selectivity described by Pirk et al [17]. The main reason for this behavior is the high number of random misses for small selectivities.…”
Section: Cache Cost Model (mentioning)
confidence: 99%
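To make the quoted cost-model argument concrete, the following C++ sketch estimates how many payload cache lines a "sequential scan with conditional reads" touches as the selectivity grows. It is a back-of-envelope model in the spirit of the generic cost model of Manegold et al. [13]; the formula, parameter values, and function names are assumptions made here for illustration, not the exact formulas of [13] or [17].

```cpp
#include <cmath>
#include <cstdio>

// Expected number of payload cache lines touched when a fraction `sel`
// of `tuples` qualifies and `per_line` tuples share one cache line.
// A line is fetched unless none of its tuples qualify: lines * (1-(1-sel)^w).
double expected_payload_lines(double tuples, double per_line, double sel) {
    double lines = tuples / per_line;
    return lines * (1.0 - std::pow(1.0 - sel, per_line));
}

int main() {
    const double tuples   = 1e8;  // 100M tuples (illustrative)
    const double per_line = 8.0;  // e.g. 8-byte values in a 64-byte line
    for (double sel : {0.0001, 0.001, 0.01, 0.1, 0.5, 1.0}) {
        // At low selectivity each qualifying tuple tends to pull in its own
        // cache line (random-miss-like behavior); at high selectivity the
        // cost saturates at a full sequential scan of all payload lines.
        std::printf("sel=%-8g expected lines=%.3g\n",
                    sel, expected_payload_lines(tuples, per_line, sel));
    }
    return 0;
}
```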
“…Each subsequent predicate introduces a sequential scan with conditional read pattern which induces cache accesses depending on the selectivity of the previous predicate. We refer to Pirk et al [17] for a detailed description of this model. Furthermore, the generic cost model introduced by Manegold et al [13] allows us to model the cache accesses for other relational operators such as joins or sorts by combining atomic access patterns.…”
Section: Cache Cost Model (mentioning)
confidence: 99%
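The access pattern described in this excerpt, a conjunction of predicates in which each subsequent column is read only at the positions that survived the earlier predicates, can be sketched as follows. The columns, predicates, and function name are hypothetical; the sketch only shows why the cache accesses of each later scan depend on the selectivity of the previous one.

```cpp
#include <cstdint>
#include <vector>

// Conjunctive selection over separately stored columns. The first column is
// read with a plain sequential scan; every later column is read with a
// "sequential scan with conditional reads", i.e. only at surviving positions.
std::vector<std::uint32_t> conjunctive_scan(const std::vector<int>& a,
                                            const std::vector<int>& b,
                                            const std::vector<int>& c) {
    std::vector<std::uint32_t> survivors;

    // Predicate 1: full sequential scan over column a.
    for (std::uint32_t i = 0; i < a.size(); ++i)
        if (a[i] > 50) survivors.push_back(i);

    // Predicate 2: conditional reads of column b; the number of cache lines
    // touched depends on how selective predicate 1 was.
    std::vector<std::uint32_t> next;
    for (std::uint32_t i : survivors)
        if (b[i] < 10) next.push_back(i);
    survivors.swap(next);

    // Predicate 3: again, accesses to column c depend on the combined
    // selectivity of the previous predicates.
    next.clear();
    for (std::uint32_t i : survivors)
        if (c[i] != 0) next.push_back(i);
    survivors.swap(next);

    return survivors;  // qualifying row positions
}
```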