Proceedings of the 2016 International Conference on Management of Data
DOI: 10.1145/2882903.2912569

Design Tradeoffs of Data Access Methods

Abstract: Database researchers and practitioners have been building methods to store, access, and update data for more than five decades. Designing access methods has been a constant effort to adapt to the ever-changing underlying hardware and workload requirements. The recent explosion in data system designs (including, in addition to traditional SQL systems, NoSQL, NewSQL, and other relational and nonrelational systems) makes understanding the tradeoffs of designing access methods more important than ever. Access method…

Cited by 21 publications (7 citation statements) | References 78 publications

“…What still holds true over all novel (existing or future) machine architectures and memory types is the fact that random seek operations incur a different cost than sequential read operations. As echoed by the authors of the RUM conjecture [9], "…in the 1970s one of the critical aspects of every database algorithm was to minimize the number of random accesses on disk; fast-forward 40 years and a similar strategy is still used, only now we minimize the number of random accesses to main memory". The use of compression to minimize memory requirements and speed up retrieval of large result sets incurs additional costs in compression and decompression times.…”
Section: Cost Model
confidence: 99%
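The cost asymmetry described in the statement above can be made concrete with a tiny experiment. The sketch below is illustrative only and not taken from the cited papers; the array size, the use of a plain Python list, and single-run timing are simplifying assumptions. It touches the same set of elements once sequentially and once in random order, so any gap between the two timings reflects the access pattern rather than the amount of work.

```python
import random
import time

N = 5_000_000                      # assumed size; adjust to taste
data = list(range(N))              # a large in-memory array of ints
offsets = list(range(N))
random.shuffle(offsets)            # same indices, visited in random order

start = time.perf_counter()
total = 0
for i in range(N):                 # sequential access pattern
    total += data[i]
seq_s = time.perf_counter() - start

start = time.perf_counter()
total = 0
for i in offsets:                  # random access pattern over the same elements
    total += data[i]
rand_s = time.perf_counter() - start

print(f"sequential: {seq_s:.2f}s, random: {rand_s:.2f}s")
```

On typical hardware the random pass is noticeably slower (the gap in CPython is smaller than a low-level benchmark would show), mirroring the observation that minimizing random accesses, now to main memory rather than disk, remains a central design goal.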
“…The tradeoff between efficiency and accuracy can be explored using approximate data structures such as Bloom filters [45], HyperLogLog [80], Count-Min Sketch [81], and sparse indexes such as ZoneMaps [82], Column Imprints [83], and many others. Choosing which overhead(s) to optimize for, and to what degree, remains a prominent part of designing a new access method, particularly as hardware and workloads change over time [84].…”
Section: RUM Conjecture
confidence: 99%
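Each structure named in the statement above answers queries approximately in exchange for a much smaller memory footprint, which is exactly the accuracy-for-overhead trade the RUM framing describes. As a self-contained illustration (not code from any of the cited works; the bit-array size, hash count, and SHA-256-based hashing are arbitrary choices), here is a minimal Bloom filter: membership tests may return false positives but never false negatives.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter sketch: no false negatives, tunable false-positive rate."""

    def __init__(self, num_bits=1 << 20, num_hashes=7):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8 + 1)

    def _positions(self, key: str):
        # Derive k bit positions from salted hashes of the key.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, key: str):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, key: str) -> bool:
        # False means definitely absent; True means present or a false positive.
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(key))

bf = BloomFilter()
bf.add("key-42")
assert bf.might_contain("key-42")   # always True for inserted keys
print(bf.might_contain("key-13"))   # usually False; occasionally a false positive
```

Growing num_bits or tuning num_hashes lowers the false-positive rate at the cost of memory, which is the knob the RUM conjecture says cannot be turned for free.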
“…To navigate this tradeoff, we provide a cost model that helps a DBA pick a "good" error threshold when creating a FITing-Tree. At a high level, there are two main objectives that a DBA can optimize: performance (i.e., lookup latency) and space consumption [8,15]. Therefore, we present two ways to apply our cost model that help a DBA choose an error threshold.…”
Section: Cost Model
confidence: 99%
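The shape of that decision can be sketched with a toy model: a larger error threshold yields fewer segments (less space) but a longer search inside the final segment (higher lookup latency). The code below is only an illustration under assumed constants and a crude linear segment-count estimate; it is not FITing-Tree's published cost model. It picks the smallest candidate error whose estimated index size fits a given space budget.

```python
import math

def estimated_segments(num_keys: int, error: int) -> int:
    # Crude assumption: the number of linear segments shrinks roughly
    # in proportion to the error bound.
    return max(1, num_keys // (2 * error))

def lookup_cost_ns(num_keys: int, error: int,
                   tree_step_ns: float = 50.0, probe_step_ns: float = 5.0) -> float:
    # Assumed cost: traverse the segment index, then binary-search
    # within +/- error positions around the predicted location.
    segments = estimated_segments(num_keys, error)
    return tree_step_ns * math.log2(segments + 1) + probe_step_ns * math.log2(2 * error + 1)

def space_cost_bytes(num_keys: int, error: int, bytes_per_segment: int = 24) -> int:
    return estimated_segments(num_keys, error) * bytes_per_segment

def pick_error_for_space(num_keys: int, space_budget_bytes: int,
                         candidates=(8, 32, 128, 512, 2048)):
    # Smallest candidate error whose index fits the budget, i.e. the
    # lowest-latency configuration that satisfies the space constraint.
    for err in candidates:
        if space_cost_bytes(num_keys, err) <= space_budget_bytes:
            return err
    return candidates[-1]

if __name__ == "__main__":
    n = 100_000_000
    err = pick_error_for_space(n, space_budget_bytes=64 * 1024 * 1024)
    print(f"error={err}, "
          f"estimated lookup={lookup_cost_ns(n, err):.0f} ns, "
          f"index size={space_cost_bytes(n, err) / 2**20:.1f} MiB")
```

The symmetric variant (pick the largest error that still meets a latency budget, thereby minimizing space) follows the same pattern with the loop direction reversed.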