We give a classical analogue to Kerenidis and Prakash's quantum recommendation system, previously believed to be one of the strongest candidates for provably exponential speedups in quantum machine learning. Our main result is an algorithm that, given an m × n matrix in a data structure supporting certain ℓ₂-norm sampling operations, outputs an ℓ₂-norm sample from a rank-k approximation of that matrix in time O(poly(k) log(mn)), only polynomially slower than the quantum algorithm. As a consequence, Kerenidis and Prakash's algorithm does not in fact give an exponential speedup over classical algorithms. Further, under strong input assumptions, the classical recommendation system resulting from our algorithm produces recommendations exponentially faster than previous classical systems, which run in time linear in m and n. The main insight of this work is the use of simple routines to manipulate ℓ₂-norm sampling distributions, which play the role of quantum superpositions in the classical setting. This correspondence indicates a potentially fruitful framework for formally comparing quantum machine learning algorithms to classical machine learning algorithms.
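The sampling primitive this abstract relies on can be made concrete with a short sketch. Below is a minimal illustration, not the paper's algorithm: the hypothetical helpers `sample_index` and `l2_sample_entry` draw a row with probability proportional to its squared norm and then an entry within that row, which is exactly the ℓ₂-norm sampling operation the data structure supports. The naive numpy version below takes time linear in the matrix size per draw; the paper's binary-tree data structure supports each draw in O(log(mn)).

```python
import numpy as np

def sample_index(weights, rng):
    """Draw an index with probability proportional to nonnegative weights."""
    p = weights / weights.sum()
    return rng.choice(len(weights), p=p)

def l2_sample_entry(A, rng):
    """One l2-norm sample from A: pick row i w.p. ||A_i||^2 / ||A||_F^2,
    then column j w.p. A_ij^2 / ||A_i||^2. Done naively here; the paper's
    tree data structure supports each draw in logarithmic time."""
    row_norms_sq = (A * A).sum(axis=1)
    i = sample_index(row_norms_sq, rng)
    j = sample_index(A[i] ** 2, rng)
    return i, j

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 50))
print(l2_sample_entry(A, rng))
```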
We present an algorithmic framework for quantum-inspired classical algorithms on close-to-low-rank matrices, generalizing the series of results started by Tang's breakthrough quantum-inspired algorithm for recommendation systems [STOC'19]. Motivated by quantum linear algebra algorithms and the quantum singular value transformation (SVT) framework of Gilyén et al. [STOC'19], we develop classical algorithms for SVT that run in time independent of input dimension, under suitable quantum-inspired sampling assumptions. Our results give compelling evidence that in the corresponding QRAM data structure input model, quantum SVT does not yield exponential quantum speedups. Since the quantum SVT framework generalizes essentially all known techniques for quantum linear algebra, our results, combined with sampling lemmas from previous work, suffice to generalize all recent results about dequantizing quantum machine learning algorithms. In particular, our classical SVT framework recovers and often improves the dequantization results on recommendation systems, principal component analysis, supervised clustering, support vector machines, low-rank regression, and semidefinite program solving. We also give additional dequantization results on low-rank Hamiltonian simulation and discriminant analysis. Our improvements come from identifying the key feature of the quantum-inspired input model that is at the core of all prior quantum-inspired results: ℓ₂-norm sampling can approximate matrix products in time independent of their dimension. We reduce all our main results to this fact, making our exposition concise, self-contained, and intuitive.
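The key fact named at the end of this abstract, that ℓ₂-norm sampling approximates matrix products in dimension-independent time, is the classic importance-sampling estimator for matrix multiplication. The sketch below (hypothetical helper name `approx_matmul`) shows the idea under the assumption that we can draw rows with probability proportional to their squared norms: each sampled outer product, rescaled by its sampling probability, is an unbiased estimate of AᵀB, and the variance shrinks with the number of samples s, independent of the number of rows.

```python
import numpy as np

def approx_matmul(A, B, s, rng):
    """Estimate A.T @ B from s row samples drawn with probability
    p_i = ||A_i||^2 / ||A||_F^2. Each term outer(A_i, B_i) / p_i is an
    unbiased estimate of A.T @ B; averaging s of them controls variance.
    With sampling access, the cost depends on s, not on the row count."""
    p = (A * A).sum(axis=1)
    p = p / p.sum()
    idx = rng.choice(A.shape[0], size=s, p=p)
    est = np.zeros((A.shape[1], B.shape[1]))
    for i in idx:
        est += np.outer(A[i], B[i]) / (s * p[i])
    return est

rng = np.random.default_rng(1)
A = rng.standard_normal((5000, 20))
B = rng.standard_normal((5000, 20))
err = np.linalg.norm(approx_matmul(A, B, 2000, rng) - A.T @ B)
print("Frobenius error:", err)
```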
We study the problem of learning a Hamiltonian H to precision ε, supposing we are given copies of its Gibbs state ρ = exp(−βH)/Tr(exp(−βH)) at a known inverse temperature β. Anshu, Arunachalam, Kuwahara, and Soleimanifar [AAKS21] recently studied the sample complexity (number of copies of ρ needed) of this problem for geometrically local N-qubit Hamiltonians. In the high-temperature (low β) regime, their algorithm has sample complexity poly(N, 1/β, 1/ε) and can be implemented with polynomial, but suboptimal, time complexity. In this paper, we study the same question for a more general class of Hamiltonians. We show how to learn the coefficients of a Hamiltonian to error ε with sample complexity S = O(log N/(βε)^2) and time complexity linear in the sample size, O(SN). Furthermore, we prove a matching lower bound showing that our algorithm's sample complexity is optimal, and hence our time complexity is also optimal. In the appendix, we show that virtually the same algorithm can be used to learn H from a real-time evolution unitary e^{−itH} in a small t regime with similar sample and time complexity.
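To see why the high-temperature regime helps, note that to first order in β we have ρ ≈ (I − βH)/2^N, so for a Hamiltonian H = Σ_a λ_a P_a written in orthogonal Pauli terms, Tr(ρ P_a) ≈ −β λ_a. The toy numerical sketch below illustrates only this first-order relation on a small example with exact density matrices; it is not the paper's estimator, which works from finitely many measurement outcomes and achieves the stated sample complexity. The helper `pauli` is hypothetical.

```python
import numpy as np
from scipy.linalg import expm
from functools import reduce

I2 = np.eye(2); X = np.array([[0, 1], [1, 0]]); Z = np.array([[1, 0], [0, -1]])

def pauli(ops, N):
    """Tensor product with the given single-qubit operators at the given
    sites and identity elsewhere."""
    mats = [I2] * N
    for site, op in ops:
        mats[site] = op
    return reduce(np.kron, mats)

N, beta = 3, 0.05
terms = [pauli([(0, Z), (1, Z)], N), pauli([(1, X)], N), pauli([(2, Z)], N)]
lam = np.array([0.7, -0.4, 0.3])                      # true coefficients
H = sum(l * P for l, P in zip(lam, terms))
rho = expm(-beta * H)
rho /= np.trace(rho)

# First-order high-temperature relation: Tr(rho P_a) ~ -beta * lambda_a,
# with O(beta) relative error from higher-order terms.
lam_hat = np.array([-np.trace(rho @ P).real / beta for P in terms])
print("true:", lam, " estimated:", lam_hat)
```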
We give a classical algorithm for linear regression analogous to the quantum matrix inversion algorithm [Harrow, Hassidim, and Lloyd, Physical Review Letters'09] for low-rank matrices [Wossnig, Zhao, and Prakash, Physical Review Letters'18], when the input matrix A is stored in a data structure applicable for QRAM-based state preparation. Namely, suppose we are given an A ∈ ℂ^{m×n} with minimum non-zero singular value σ which supports certain efficient ℓ₂-norm importance sampling queries, along with a b ∈ ℂ^m. Then, for some x ∈ ℂ^n satisfying ‖x − A⁺b‖ ≤ ε‖A⁺b‖, we can output a measurement of |x⟩ in the computational basis and output an entry of x with classical algorithms that run in Õ(‖A‖_F^6 ‖A‖^6/(σ^12 ε^4)) and Õ(‖A‖_F^6 ‖A‖^2/(σ^8 ε^4)) time, respectively. This improves on previous "quantum-inspired" algorithms in this line of research by at least a factor of ‖A‖^16/(σ^16 ε^2) [Chia, Gilyén, Li, Lin, Tang, and Wang, STOC'20]. As a consequence, we show that quantum computers can achieve at most a factor-of-12 speedup for linear regression in this QRAM data structure setting and related settings. Our work applies techniques from sketching algorithms and optimization to the quantum-inspired literature. Unlike earlier works, this is a promising avenue that could lead to feasible implementations of classical regression in quantum-inspired settings, for comparison against future quantum computers.
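One sketching ingredient this line of work draws on is row-norm importance sampling for least squares: sample and rescale a small number of rows so that the sketched normal equations approximate the full ones, then solve the small problem. The snippet below (hypothetical helper name `sketched_lstsq`) is a minimal illustration of that idea on a dense matrix, not the paper's algorithm, which operates from sampling access alone and sketches on both sides.

```python
import numpy as np

def sketched_lstsq(A, b, s, rng):
    """Least squares on an importance-sampled row sketch: draw rows with
    probability p_i = ||A_i||^2 / ||A||_F^2 and rescale by 1/sqrt(s * p_i),
    so that (SA).T @ (SA) is an unbiased estimate of A.T @ A."""
    p = (A * A).sum(axis=1)
    p = p / p.sum()
    idx = rng.choice(A.shape[0], size=s, p=p)
    scale = 1.0 / np.sqrt(s * p[idx])
    SA = A[idx] * scale[:, None]
    Sb = b[idx] * scale
    x, *_ = np.linalg.lstsq(SA, Sb, rcond=None)
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((20000, 10))
b = A @ rng.standard_normal(10) + 0.01 * rng.standard_normal(20000)
x = sketched_lstsq(A, b, 1000, rng)
print(np.linalg.norm(x - np.linalg.lstsq(A, b, rcond=None)[0]))
```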