We consider quantum computations comprising only commuting gates, known as IQP computations, and provide compelling evidence that the task of sampling their output probability distributions is unlikely to be achievable by any efficient classical means. More specifically, we introduce the class post-IQP of languages decided with bounded error by uniform families of IQP circuits with post-selection, and prove first that post-IQP equals the classical class PP. Using this result we show that if the output distributions of uniform IQP circuit families could be classically efficiently sampled, either exactly in total variation distance or even approximately up to 41 per cent multiplicative error in the probabilities, then the infinite tower of classical complexity classes known as the polynomial hierarchy would collapse to its third level. We mention some further results on the classical simulation properties of IQP circuit families, in particular showing that if the output distribution results from measurements on only O(log n) lines then it may, in fact, be classically efficiently sampled.
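To make the sampling task concrete, here is a minimal brute-force sketch (not the paper's construction) of sampling the output distribution of a small IQP circuit: Hadamards on every qubit, a diagonal layer of phase gates, Hadamards again, then measurement. The gate choices (Z and CZ terms picked at random) and the qubit count are illustrative assumptions; brute force only works for small n.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 4  # illustrative number of qubits; brute force scales as 2^n

# Diagonal phase f(x): random Z terms (single qubits) and CZ terms (pairs),
# each included with probability 1/2, so every phase is +/-1.
z_terms = [i for i in range(n) if rng.random() < 0.5]
cz_terms = [(i, j) for i, j in itertools.combinations(range(n), 2)
            if rng.random() < 0.5]

def phase(x):
    """Phase (-1)^f(x) picked up by computational basis state x (a bit tuple)."""
    f = sum(x[i] for i in z_terms) + sum(x[i] * x[j] for i, j in cz_terms)
    return (-1) ** f

def amplitude(y):
    """Amplitude <y| H^n D H^n |0^n> = (1/2^n) sum_x (-1)^(f(x) + x.y)."""
    total = 0.0
    for x in itertools.product((0, 1), repeat=n):
        total += phase(x) * (-1) ** sum(xi * yi for xi, yi in zip(x, y))
    return total / 2 ** n

outcomes = list(itertools.product((0, 1), repeat=n))
probs = np.array([amplitude(y) ** 2 for y in outcomes])
assert np.isclose(probs.sum(), 1.0)

# Draw a few samples from the exact output distribution.
for s in rng.choice(len(outcomes), size=5, p=probs):
    print(outcomes[s])
```

The hardness results concern doing this efficiently at large n, where the exact probabilities above are no longer accessible by enumeration.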
We use the class of commuting quantum computations known as IQP (Instantaneous Quantum Polynomial time) to strengthen the conjecture that quantum computers are hard to simulate classically. We show that, if either of two plausible average-case hardness conjectures holds, then IQP computations are hard to simulate classically up to constant additive error. One conjecture relates to the hardness of estimating the complex-temperature partition function for random instances of the Ising model; the other concerns approximating the number of zeroes of random low-degree polynomials. We observe that both conjectures can be shown to be valid in the setting of worst-case complexity. We arrive at these conjectures by deriving spin-based generalisations of the Boson Sampling problem that avoid the so-called permanent anticoncentration conjecture.

Quantum computers are conjectured to outperform classical computers for a variety of important tasks ranging from integer factorisation [1] to the simulation of quantum mechanics [2]. However, to date there is relatively little rigorous evidence for this conjecture. It is well established that quantum computers can yield an exponential advantage in the query and communication complexity models, but in the more physically meaningful model of time complexity there are no proven separations between quantum and classical computation. This can be seen as a consequence of the extreme difficulty of proving bounds on the power of classical computing models, such as the famous P vs. NP problem. Given this difficulty, the most we can reasonably hope for is to show that quantum computations cannot be simulated efficiently classically, assuming some widely believed complexity-theoretic conjecture. Any set of quantum circuits that can implement Shor's algorithm [1] provides a canonical example, since efficient classical simulation of such circuits would imply an efficient classical factoring algorithm. However, one could hope for examples with wider-reaching complexity-theoretic consequences.

With this in mind, in both [3] and [4] it was shown that the existence of an efficient classical sampler from a distribution that is close to the output distribution of an arbitrary quantum circuit, to within a small multiplicative error in each output probability, would imply that post-selected classical computation is equivalent to post-selected quantum computation. This consequence is considered very unlikely, as it would collapse the infinite tower of complexity classes known as the Polynomial Hierarchy [5] to its third level. In both works this was proven even for non-universal quantum circuit families: commuting quantum circuits in the case of [3], and linear-optical networks in [4]. These non-universal families are of physical interest because they are simpler to implement, and easier to analyse because of the elegant mathematical structures on which they are based. Unfortu...
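As a small illustration of the quantity the second conjecture refers to, the sketch below counts the zeroes of a random degree-3 polynomial f over F_2 by exhaustive evaluation. For an IQP circuit whose diagonal layer applies the phase (-1)^f(x) (built from Z, CZ and CCZ gates), the amplitude <0^n|C|0^n> equals (N0 - N1)/2^n, where N0 and N1 count the zeroes and ones of f. The random choice of monomials and the small n are illustrative assumptions.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 5  # illustrative; exhaustive evaluation costs 2^n

# Random degree-3 polynomial over F_2: each linear, quadratic and cubic
# monomial is included with probability 1/2.
terms = []
for degree in (1, 2, 3):
    for subset in itertools.combinations(range(n), degree):
        if rng.random() < 0.5:
            terms.append(subset)

def f(x):
    """Evaluate the polynomial on bit tuple x, modulo 2."""
    return sum(all(x[i] for i in subset) for subset in terms) % 2

values = [f(x) for x in itertools.product((0, 1), repeat=n)]
n_zero = values.count(0)
n_one = values.count(1)

# Amplitude <0^n|C|0^n> of the matching IQP circuit.
amplitude = (n_zero - n_one) / 2 ** n
print(f"zeroes = {n_zero}, ones = {n_one}, amplitude = {amplitude:+.4f}")
```

Approximating this count (equivalently, the amplitude) for random instances is the task conjectured to be hard on average.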
We examine theoretical architectures and an abstract model for a restricted class of quantum computation, called here temporally unstructured ('instantaneous') quantum computation because it allows for essentially no temporal structure within the quantum dynamics. Using the theory of binary matroids, we argue that the paradigm is rich enough to enable sampling from probability distributions that cannot, classically, be sampled efficiently and accurately. This paradigm also admits simple interactive proof games that may convince a sceptic of the existence of truly quantum effects. Furthermore, these effects can be created using significantly fewer qubits than are required for running Shor's algorithm.
Bipartite entanglement is one of the fundamental quantifiable resources of quantum information theory. We propose a new application of this resource to the theory of quantum measurements. According to Naimark's theorem, any rank 1 generalised measurement (POVM) $M$ may be represented as a von Neumann measurement in an extended (tensor product) space of the system plus ancilla. By considering a suitable average of the entanglements of these measurement directions and minimising over all Naimark extensions, we define a notion of entanglement cost $E_{\min}(M)$ of $M$. We give a constructive means of characterising all Naimark extensions of a given POVM. We identify various classes of POVMs with zero and non-zero cost and explicitly characterise all POVMs in 2 dimensions having zero cost. We prove a constant upper bound on the entanglement cost of any POVM in any dimension. Hence the asymptotic entanglement cost (i.e. the large $n$ limit of the cost of $n$ applications of $M$, divided by $n$) is zero for all POVMs. The trine measurement is defined by three rank 1 elements, with directions symmetrically placed around a great circle on the Bloch sphere. We give an analytic expression for its entanglement cost. Defining a normalised cost of any $d$-dimensional POVM by $E_{\min}(M)/\log_2 d$, we show (using a combination of analytic and numerical techniques) that the trine measurement is more costly than any other POVM with $d>2$, or with $d=2$ and ancilla dimension 2. This strongly suggests that the trine measurement is the most costly of all POVMs.
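For concreteness, here is a small numerical sketch of the trine POVM mentioned above, assuming the standard parameterisation (Bloch vectors 120 degrees apart on a great circle). The Naimark isometry below is just one convenient extension, not the cost-minimising one discussed in the abstract.

```python
import numpy as np

# Trine directions |psi_i> = cos(t/2)|0> + sin(t/2)|1>, t = 0, 2pi/3, 4pi/3.
angles = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]
psis = [np.array([np.cos(t / 2), np.sin(t / 2)]) for t in angles]

# Rank-1 POVM elements M_i = (2/3)|psi_i><psi_i|; they must sum to the identity.
elements = [(2 / 3) * np.outer(p, p) for p in psis]
assert np.allclose(sum(elements), np.eye(2))

# One Naimark extension: the 3x2 isometry whose i-th row is sqrt(2/3)<psi_i|.
# Measuring the standard basis on V|phi> reproduces the trine statistics.
V = np.vstack([np.sqrt(2 / 3) * p for p in psis])
assert np.allclose(V.conj().T @ V, np.eye(2))      # V is an isometry

phi = np.array([1.0, 0.0])                         # example input state |0>
povm_probs = [phi.conj() @ M @ phi for M in elements]
naimark_probs = np.abs(V @ phi) ** 2
assert np.allclose(povm_probs, naimark_probs)
print(np.round(naimark_probs, 4))                  # [0.6667 0.1667 0.1667]
```

The entanglement cost studied in the paper is obtained by optimising over all such extensions, which this sketch does not attempt.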