There is a large body of evidence that information carriers governed by quantum mechanics offer greater computational power than those governed by the laws of classical mechanics. But the question of the exact nature of the power contributed by quantum mechanics remains only partially answered. Moreover, there is doubt over whether a quantum computation large enough to definitively demonstrate quantum supremacy is practically achievable. Recently, the study of computational problems that produce samples from probability distributions has both added to our understanding of the power of quantum algorithms and lowered the requirements for demonstrating fast quantum algorithms. The proposed quantum sampling problems do not require a quantum computer capable of universal operations, and they also tolerate physically realistic errors in their operation. This is an encouraging step towards an experimental demonstration of quantum algorithmic supremacy. In this paper, we review sampling problems and the arguments that have been used to deduce when sampling problems are hard for classical computers to simulate. Two classes of quantum sampling problems that demonstrate the supremacy of quantum algorithms are BosonSampling and Instantaneous Quantum Polynomial-time (IQP) Sampling. We present the details of these classes and recent experimental progress towards demonstrating quantum supremacy in BosonSampling.

npj Quantum Information (2017) 3:15; doi:10.1038/s41534-017-0018-2
INTRODUCTION

There is a growing sense of excitement that in the near future prototype quantum computers might be able to outperform any classical computer. That is, they might demonstrate supremacy over classical devices.1 This excitement has in part been driven by theoretical research into the complexity of intermediate quantum computing models, which over the last 15 years has lowered the physical requirements for a quantum speedup while increasing the level of rigour in the argument that such systems are difficult to simulate classically.

These advances are rooted in the discovery by Terhal and DiVincenzo2 that sufficiently accurate classical simulations of even quite simple quantum computations could have significant implications for the interrelationships between computational complexity classes.3 Since then the theoretical challenge has been to demonstrate that such a result holds for levels of precision commensurate with what is expected from realisable quantum computers. A first step in this direction established that classical computers cannot efficiently mimic the output of ideal quantum circuits to within a reasonable multiplicative (or relative) error in the frequency with which output events occur, without similarly disrupting the expected relationships between classical complexity classes.4, 5 In a major breakthrough, Aaronson and Arkhipov laid out an argument establishing that efficient classical simulation of linear optical systems is not possible, even if that simulation is only required to be accurate to within a...
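To make the notion of multiplicative (relative) error concrete, the following sketch checks whether an approximate sampler's output probabilities q satisfy |q(x) − p(x)| ≤ ε·p(x) for every outcome x. The distributions and outcome labels here are purely illustrative, not drawn from any specific experiment in this paper.

```python
def within_multiplicative_error(p, q, eps):
    """Return True if q approximates p to within multiplicative error eps,
    i.e. |q(x) - p(x)| <= eps * p(x) for every outcome x of p."""
    return all(abs(q[x] - p[x]) <= eps * p[x] for x in p)

# Hypothetical example: ideal and approximate distributions over four outcomes.
p = {"00": 0.5, "01": 0.25, "10": 0.125, "11": 0.125}
q = {"00": 0.52, "01": 0.24, "10": 0.12, "11": 0.12}

print(within_multiplicative_error(p, q, 0.1))   # each outcome deviates by at most 10%
print(within_multiplicative_error(p, q, 0.01))  # fails: "00" is off by 4%
```

Note that the allowed deviation scales with p(x) itself, so even exponentially small output probabilities must be reproduced accurately; this is what makes multiplicative-error simulation such a demanding requirement compared with additive (total-variation) error.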