2001
DOI: 10.1142/s021819590100064x

The Shuffling Buffer

Abstract: The complexity of randomized incremental algorithms is analyzed under the assumption that the input arrives in random order. To guarantee this hypothesis, all n data must be known in advance so that they can be shuffled, which contradicts the on-line nature of the algorithm. We present the shuffling buffer technique, which introduces enough randomness to guarantee an improvement on the worst-case complexity while knowing only k data in advance. Typically, an algorithm with O(n²) worst-case complexity and O(n) or O(n log n) r…
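The buffer idea described in the abstract (and in the citation statements below) can be sketched as follows. This is a hypothetical illustration of the technique, not the paper's code: a buffer of k slots is kept, each arriving datum overwrites a uniformly random slot, and the evicted datum is the one actually handed to the incremental algorithm, so only k items are ever stored in advance.

```python
import random

def shuffling_buffer(stream, k, seed=None):
    """Yield the items of `stream` in a partially randomized order.

    Sketch of the shuffling-buffer idea: keep a buffer of at most k
    items; when a new item arrives and the buffer is full, pick a
    uniformly random slot, emit the item stored there, and store the
    new item in its place. At end of input, flush the buffer in a
    random order. (Function and parameter names are hypothetical.)
    """
    rng = random.Random(seed)
    buf = []
    for x in stream:
        if len(buf) < k:
            buf.append(x)          # buffer not yet full: just store
        else:
            j = rng.randrange(k)   # random slot to evict
            yield buf[j]
            buf[j] = x
    rng.shuffle(buf)               # flush remaining items randomly
    yield from buf
```

With k = n this degenerates to a full random shuffle, and with k = 1 it reduces to the original on-line order; intermediate k trades memory for randomness, which is the trade-off the paper quantifies.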

Cited by 5 publications (3 citation statements)
References 5 publications
“…Devillers and Guigue [15] considered a different kind of partially randomized insertion order, for handling constructions for which the data is provided sequentially rather than all at once. Arriving data can be stored and reshuffled (randomly) in a buffer of limited size before it has to be inserted into the data structure.…”
Section: Discussion
confidence: 99%
“…Moreover, in real-life applications the curves are typically inserted into the arrangement in a nonrandom order. This reduces the performance of the RIC algorithm, which relies on a random order of insertion, unless special procedures are followed [Devillers and Guigue 2001]. The basic idea behind the landmarks algorithm is to choose and locate points (landmarks) within the arrangement and store them in a data structure that supports nearest-neighbor search.…”
Section: Point Location With Landmarks
confidence: 99%
“…On the other hand, Amenta et al showed that the entropy may slowly decay during the RIC without penalty [1]; in other words, the insertion sequence can afford to be less and less random as the construction progresses. Devillers and Guigue introduced the shuffling buffer, which randomly permutes contiguous subsequences of the input sequence of a certain length k, and they provide trade-offs between the length k and the running time of the RIC [16].…”
Section: Introduction
confidence: 99%