2018
DOI: 10.1007/s00440-018-0860-y

The power of online thinning in reducing discrepancy

Abstract: Consider an infinite sequence of independent, uniformly chosen points from [0, 1]^d. After looking at each point in the sequence, an overseer is allowed to either keep it or reject it, and this choice may depend on the locations of all previously kept points. However, the overseer must keep at least one of every two consecutive points. We call a sequence generated in this fashion a two-thinning sequence. Here, the purpose of the overseer is to control the discrepancy of the empirical distribution of points, t…
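To make the keep/reject constraint concrete, here is a minimal Python sketch of a two-thinning overseer for the one-dimensional case. The greedy rule (keep a point when it lands in the currently emptier half of [0, 1], and always keep a point when the previous one was rejected) and all names are illustrative assumptions; this is not the strategy analysed in the paper.

```python
import random

def greedy_two_thinning(n, seed=0):
    """Hypothetical two-thinning overseer for d = 1 (illustration only).

    A point may be rejected only if the previous point was kept, so at
    least one of every two consecutive points is kept.  The naive rule
    here keeps a point when it falls in the half of [0, 1] that currently
    contains fewer kept points.
    """
    rng = random.Random(seed)
    kept = []
    counts = [0, 0]            # kept points in [0, 1/2) and [1/2, 1]
    rejected_previous = False
    for _ in range(n):
        x = rng.random()
        half = 0 if x < 0.5 else 1
        if rejected_previous or counts[half] <= counts[1 - half]:
            kept.append(x)
            counts[half] += 1
            rejected_previous = False
        else:
            rejected_previous = True
    return kept
```

For example, `greedy_two_thinning(1000)` returns the kept points, whose empirical distribution can then be compared with the uniform distribution to measure discrepancy.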

Cited by 22 publications (16 citation statements) | References 24 publications
“…In contrast, the proof approach of [JKS19] completely breaks down for Tusnády's problem even in two dimensions and does not give any better lower bounds in terms of d. We recently learned that results similar to Theorems 1.1 and 1.3 were also obtained by Dwivedi et al. [DFGGR19], in the context of understanding the power of online thinning in reducing discrepancy.…”
Section: Our Discrepancy Bounds (mentioning; confidence: 97%)
“…The name "two-thinning" is due to yet another point of view on this setting. According to this view, an infinite sequence of allocations has been drawn independently and uniformly at random, and the overseer is allowed to thin it on-line (i.e., delete some of the allocations depending only on the past), as long as at most one of every two consecutive entries is deleted (for a more thorough discussion of the model see joint work with Ramdas and Dwivedi [6], where the model was introduced).…”
Section: Discussion (mentioning; confidence: 99%)
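The allocation view quoted in the statement above can be sketched in the same spirit. The "delete a proposal when its bin is above-average loaded" rule below is a hypothetical illustration, not the strategy analysed in [6]; names and parameters are assumptions.

```python
import random

def two_thinning_allocation(n_bins, n_steps, seed=0):
    """Illustrative on-line thinning of a uniform allocation sequence.

    At most one of every two consecutive proposals is deleted; the toy
    rule deletes a proposal when its bin already holds more than the
    average load.
    """
    rng = random.Random(seed)
    loads = [0] * n_bins
    placed = 0
    deleted_previous = False
    for _ in range(n_steps):
        b = rng.randrange(n_bins)
        above_average = loads[b] > placed / n_bins
        if above_average and not deleted_previous:
            deleted_previous = True          # thin (delete) this proposal
        else:
            loads[b] += 1                    # accept the proposal
            placed += 1
            deleted_previous = False
    return loads
```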
“…This notion was recently formulated and studied by Peres, Talwar and Wieder [9], viewing it as having two-choices with probability β and no-choice with probability (1 − β), independently for every ball. Once errors of this nature are introduced to the model, two-choices and one-retry are equivalent up to a parameter change, and in the lightly loaded case of ρn balls allocated into n bins, both offer no improvement over having no-choice at all (see [6] for more details).…”
Section: Discussion (mentioning; confidence: 99%)
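The β-mixture described in the statement above admits a direct sketch (Python; the function and parameter names are illustrative): with probability β a ball compares two uniform bins and goes to the less loaded one, and otherwise it is placed by a single uniform choice.

```python
import random

def one_plus_beta_choice(n_bins, n_balls, beta, seed=0):
    """Sketch of a (1 + beta)-choice allocation as described above."""
    rng = random.Random(seed)
    loads = [0] * n_bins
    for _ in range(n_balls):
        i = rng.randrange(n_bins)
        if rng.random() < beta:              # two-choices step
            j = rng.randrange(n_bins)
            if loads[j] < loads[i]:
                i = j
        loads[i] += 1                        # otherwise a no-choice step
    return loads
```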
“…This is a significant improvement over One-Choice, but also the total number of samples is (1 + o(1)) · m, which is an improvement over Two-Choice. Similar threshold processes have been studied in queuing [9], [20, Section 5] and discrepancy theory [8]. For values of m sufficiently larger than n, [11] and [18] prove some lower and upper bounds for a more general class of adaptive thinning protocols (here, adaptive means that the choice of the threshold may depend on the load configuration).…”
Section: Introduction (mentioning; confidence: 86%)
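A threshold protocol of the kind referenced in the statement above can be sketched as follows; the fixed threshold and all names are assumptions for illustration, whereas the cited works choose (or adapt) the threshold more carefully.

```python
import random

def threshold_allocation(n_bins, n_balls, threshold, seed=0):
    """Illustrative threshold process: place a ball in its first sampled
    bin if that bin's load is at most `threshold`; otherwise place it in
    a second, fresh uniform sample.  When second samples are rare, the
    total number of samples stays close to the number of balls."""
    rng = random.Random(seed)
    loads = [0] * n_bins
    samples = 0
    for _ in range(n_balls):
        i = rng.randrange(n_bins)
        samples += 1
        if loads[i] > threshold:             # first sample rejected
            i = rng.randrange(n_bins)
            samples += 1
        loads[i] += 1
    return loads, samples
```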