2015
DOI: 10.4086/toc.2015.v011a016

Abstract: In this note, we develop a bounded-error quantum algorithm that makes O(n^{1/4} ε^{−1/2}) queries to a function f : {0,1}^n → {0,1}, accepts when f is monotone, and rejects when f is ε-far from being monotone. This result gives a super-quadratic improvement over the best known randomized algorithm for all ε = o(1). The improvement is cubic when ε = 1/√n.
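For contrast with the quantum query bound above, the classical baseline can be sketched as the standard "edge tester": sample random hypercube edges and reject if any sampled edge violates monotonicity. This is a hedged illustration, not the paper's algorithm — the function name, the trial count, and the bit manipulation are our own choices, and the O(n/ε) sample count is the naive edge-tester bound rather than the paper's O(n^{1/4} ε^{−1/2}).

```python
import random

def edge_tester(f, n, eps):
    """Classical 'edge tester' sketch for monotonicity of f: {0,1}^n -> {0,1}.

    Samples random hypercube edges and rejects if any sampled edge is
    violated, i.e. f(x with bit i = 0) > f(x with bit i = 1).
    Not the paper's quantum algorithm; trial count is the naive bound.
    """
    trials = int(2 * n / eps) + 1      # O(n/eps) samples
    for _ in range(trials):
        x = random.getrandbits(n)      # uniform point of the hypercube
        i = random.randrange(n)        # uniform coordinate -> a random edge through x
        lo = x & ~(1 << i)             # endpoint with bit i = 0
        hi = x | (1 << i)              # endpoint with bit i = 1
        if f(lo) > f(hi):              # monotonicity violated along this edge
            return False               # reject
    return True                        # accept: no violated edge was sampled
```

A monotone function has no violated edges, so the tester always accepts it; a function that is ε-far from monotone has many violated edges, so random sampling finds one with good probability.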

Cited by 18 publications (4 citation statements)
References 24 publications
“…Now let us turn to the lower bounds. Ergün et al. [12] proved a lower bound of Ω(log n) for testing monotonicity on the line in the so-called comparison-based model. In this model, each query of the tester may depend only on the order relations between the responses to the previous queries, but not on the values of the responses themselves.…”
Section: Introduction
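The comparison-based model in the excerpt above can be made concrete with a sketch of the classic binary-search tester for monotonicity on the line. This is a hedged illustration in the style of the tester attributed to Ergün et al.: the names and the repetition constant are our own, and it assumes distinct values (the standard reduction). Each round performs one binary search — O(log n) queries whose outcomes feed only order comparisons, which is exactly the model in which the Ω(log n) lower bound holds.

```python
import random

def line_monotonicity_tester(f, n, eps):
    """Comparison-based monotonicity tester sketch for f on {0, ..., n-1}.

    Assumes distinct values (a standard reduction).  Each of the O(1/eps)
    rounds binary-searches for f(i) as if the sequence were sorted and
    rejects if the search does not land back at index i.
    """
    rounds = int(2 / eps) + 1          # repetition count is a standard choice, not tuned
    for _ in range(rounds):
        i = random.randrange(n)
        v = f(i)
        lo, hi, found = 0, n - 1, False
        while lo <= hi:                # binary search for v using comparisons only
            mid = (lo + hi) // 2
            w = f(mid)
            if w == v:
                found = (mid == i)     # must land exactly where v came from
                break
            elif w < v:
                lo = mid + 1
            else:
                hi = mid - 1
        if not found:
            return False               # reject: search path inconsistent with sorted order
    return True                        # accept: all searches behaved as in a sorted array
```

If f is monotone (with distinct values), every search lands at its own index and the tester always accepts; if f is far from monotone, a random starting index yields an inconsistent search path with good probability.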
“…The negative-weight adversary bound has been used to prove lower bounds [22,19,20], but more frequently to prove upper bounds, in particular using the learning graph approach [12]. For instance, the adversary bound (sometimes in the equivalent form of span programs) was used to construct quantum algorithms for formula evaluation [55,54,61], finding subgraphs [43,18,42], the k-distinctness problem [11], and in learning and property testing [14,16].…”
Section: Adversary Bound
“…where we used a_r c < a_c r, (1 − 2^{−κ})^{−1} ≤ 2 and a_r 2^{κ−1} ≤ 1/2 to get the first, second and third inequalities, respectively. Using the union bound over all r ∈ [d] and grouping according to n_ℓ, we get (3.8), where the second-to-last inequality follows from Equations (3.6) and (3.7), and the last inequality holds because if coordinate r is included in the count n_ℓ, then 2^{ℓ−1} < a_r ≤ 2^{ℓ}.…”
Section: Bad Events
“…Testing various properties of functions, including monotonicity (see, e.g., [38,52,33,34,47,36,53,35,41,1,42,5,16,12,19,22,17,11,23,24,21,27,26,45,8,9,32,49,29,7,13,25,14,50] and recent surveys [54,55,20]), the Lipschitz property [43,22,31,17,2], bounded-derivative properties [21], linearity [18,6,10,44,56], submodularity [51,58,60,…”
Section: Introduction