PennyLane is a Python 3 software framework for optimization and machine learning of quantum and hybrid quantum-classical computations. The library provides a unified architecture for near-term quantum computing devices, supporting both qubit and continuous-variable paradigms. PennyLane's core feature is the ability to compute gradients of variational quantum circuits in a way that is compatible with classical techniques such as backpropagation. PennyLane thus extends the automatic differentiation algorithms common in optimization and machine learning to include quantum and hybrid computations. A plugin system makes the framework compatible with any gate-based quantum simulator or hardware. We provide plugins for Strawberry Fields, Rigetti Forest, Qiskit, and ProjectQ, allowing PennyLane optimizations to be run on publicly accessible quantum devices provided by Rigetti and IBM Q. On the classical front, PennyLane interfaces with accelerated machine learning libraries such as TensorFlow, PyTorch, and autograd. PennyLane can be used for the optimization of variational quantum eigensolvers, quantum approximate optimization, quantum machine learning models, and many other applications.
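As a concrete illustration of the differentiable-circuit workflow described above, the following minimal sketch uses PennyLane's built-in default.qubit simulator; the specific circuit, parameters, and optimizer settings are arbitrary placeholders rather than anything prescribed by the abstract.

```python
import pennylane as qml
from pennylane import numpy as np  # autograd-backed NumPy shipped with PennyLane

# A small variational circuit on the built-in simulator.
dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit(params):
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(1))

params = np.array([0.1, 0.2], requires_grad=True)

# Gradients of the quantum circuit, computed the same way autograd
# differentiates a classical function.
grad_fn = qml.grad(circuit)
print(circuit(params), grad_fn(params))

# A few steps of gradient descent on the circuit's expectation value.
opt = qml.GradientDescentOptimizer(stepsize=0.4)
for _ in range(20):
    params = opt.step(circuit, params)
```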
Achieving near-term quantum advantage will require effective methods for mitigating hardware noise. Data-driven approaches to error mitigation are promising, with popular examples including zero-noise extrapolation (ZNE) and Clifford data regression (CDR). Here we propose a novel, scalable error mitigation method that conceptually unifies ZNE and CDR. Our approach, called variable-noise Clifford data regression (vnCDR), significantly outperforms these individual methods in numerical benchmarks. vnCDR generates training data first via near-Clifford circuits (which are classically simulable) and second by varying the noise levels in these circuits. We employ a noise model obtained from IBM's Ourense quantum computer to benchmark our method. For the problem of estimating the energy of an 8-qubit Ising model system, vnCDR improves the absolute energy error by a factor of 33 over the unmitigated results and by factors of 20 and 1.8 over ZNE and CDR, respectively. For the problem of correcting observables from random quantum circuits with 64 qubits, vnCDR improves the error by factors of 2.7 and 1.5 over ZNE and CDR, respectively.
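To make the training-and-regression step concrete, here is a minimal NumPy sketch of a vnCDR-style fit. The linear ansatz, synthetic training data, and noise levels are illustrative assumptions rather than the paper's exact model; the point is only that exact expectation values of near-Clifford circuits are regressed against their noisy estimates at several noise levels, and the learned coefficients are then applied to the target circuit's measurements.

```python
import numpy as np

# Hypothetical training data (shapes are illustrative):
#   X_train[i, j] = noisy expectation of training circuit i at noise level j
#                   (near-Clifford circuits, so exact values are classically computable)
#   y_train[i]    = exact (noise-free) expectation of training circuit i
rng = np.random.default_rng(0)
n_circuits, n_noise_levels = 100, 3
y_train = rng.uniform(-1, 1, n_circuits)
noise_scales = np.array([0.9, 0.8, 0.7])  # toy attenuation at each noise level
X_train = y_train[:, None] * noise_scales + 0.02 * rng.normal(size=(n_circuits, n_noise_levels))

# Fit a linear ansatz  y ~ sum_j a_j * x_j  by least squares.
a, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

# Mitigate the target circuit: combine its noisy expectation values,
# measured at the same noise levels, with the learned coefficients.
x_target = np.array([0.45, 0.40, 0.35])  # placeholder measurements
mitigated_value = x_target @ a
print(mitigated_value)
```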
Variational Quantum Algorithms (VQAs) are a promising approach for practical applications like chemistry and materials science on near-term quantum computers, as they typically reduce quantum resource requirements. However, in order to implement VQAs, an efficient classical optimization strategy is required. Here we present a new stochastic gradient descent method using an adaptive number of shots at each step, called the global Coupled Adaptive Number of Shots (gCANS) method, which improves on prior art in both the number of iterations and the number of shots required. These improvements reduce both the time and money required to run VQAs on current cloud platforms. We analytically prove that in a convex setting gCANS achieves geometric convergence to the optimum. Further, we numerically investigate the performance of gCANS on some chemical configuration problems. We also consider finding the ground state for an Ising model with different numbers of spins to examine the scaling of the method. We find that for these problems, gCANS compares favorably to all of the other optimizers we consider.
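The sketch below illustrates the general idea of shot-adaptive stochastic gradient descent. The shot-allocation heuristic, step size, and toy gradient estimator are placeholders of my own; the precise gCANS allocation rule and its convergence analysis are given in the paper.

```python
import numpy as np

def adaptive_shot_gradient_descent(grad_estimator, theta, lr=0.1, n_steps=50,
                                   s_min=10, s_max=10_000):
    """Toy shot-adaptive gradient descent loop.

    `grad_estimator(theta, shots)` is assumed to return a tuple
    (mean_gradient, per_component_std) estimated from `shots` circuit
    repetitions. The shot update below is a rough stand-in for the
    allocation rule derived in the paper.
    """
    shots = s_min
    for _ in range(n_steps):
        g, sigma = grad_estimator(theta, shots)
        theta = theta - lr * g
        # Spend more shots when the gradient estimate is noisy relative to its size.
        noise_to_signal = np.sum(sigma ** 2) / max(np.linalg.norm(g) ** 2, 1e-12)
        shots = int(np.clip(100 * noise_to_signal, s_min, s_max))
    return theta

# Toy usage: noisy gradient of f(theta) = ||theta||^2 / 2, with shot noise ~ 1/sqrt(shots).
def toy_grad(theta, shots):
    std = 1.0 / np.sqrt(shots)
    return theta + np.random.normal(scale=std, size=theta.shape), np.full_like(theta, std)

print(adaptive_shot_gradient_descent(toy_grad, np.array([1.0, -2.0])))
```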
We propose a new method to extend the size of a quantum computation beyond the number of physical qubits available on a single device. This is accomplished by randomly inserting measure-and-prepare channels to express the output state of a large circuit as a separable state across distinct devices. Our method employs randomized measurements, resulting in a sample overhead that is Õ(4^k/ε^2), where ε is the accuracy of the computation and k the number of parallel wires that are "cut" to obtain smaller sub-circuits. We also show an information-theoretic lower bound of Ω(2^k/ε^2) for any comparable procedure. We use our techniques to show that circuits in the Quantum Approximate Optimization Algorithm (QAOA) with p entangling layers can be simulated by circuits on a fraction of the original number of qubits with an overhead that is roughly 2^{O(pκ)}, where κ is the size of a known balanced vertex separator of the graph which encodes the optimization problem. We obtain numerical evidence of practical speedups using our method applied to the QAOA, compared to prior work. Finally, we investigate the practical feasibility of applying the circuit cutting procedure to large-scale QAOA problems on clustered graphs by using a 30-qubit simulator to evaluate the variational energy of a 129-qubit problem as well as carry out a 62-qubit optimization.
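The measure-and-prepare idea can be sanity-checked on a single cut wire. The NumPy snippet below verifies the standard eight-term identity that rewrites a one-qubit state as a signed mixture of measure-and-prepare operations; it illustrates only the underlying decomposition, not the randomized-measurement sampling scheme or the Õ(4^k/ε^2) overhead analysis proposed in the abstract.

```python
import numpy as np

# Pauli matrices and eigenstate projectors.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def proj(v):
    v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())

zero, one = proj(np.array([1, 0])), proj(np.array([0, 1]))
plus, minus = proj(np.array([1, 1])), proj(np.array([1, -1]))
plus_i, minus_i = proj(np.array([1, 1j])), proj(np.array([1, -1j]))

# Eight measure-and-prepare terms: (coefficient, observable measured on one
# fragment, eigenstate re-prepared on the other fragment).
terms = [
    (+0.5, I, zero), (+0.5, I, one),
    (+0.5, X, plus), (-0.5, X, minus),
    (+0.5, Y, plus_i), (-0.5, Y, minus_i),
    (+0.5, Z, zero), (-0.5, Z, one),
]

# Random single-qubit density matrix.
A = np.random.randn(2, 2) + 1j * np.random.randn(2, 2)
rho = A @ A.conj().T
rho /= np.trace(rho)

# Reassemble rho from the measure-and-prepare terms across the cut.
rebuilt = sum(c * np.trace(O @ rho) * sigma for c, O, sigma in terms)
print(np.allclose(rebuilt, rho))  # True
```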