Gate set tomography (GST) is a protocol for detailed, predictive characterization of logic operations (gates) on quantum computing processors. Early versions of GST emerged around 2012-13, and since then it has been refined, demonstrated, and used in a large number of experiments. This paper presents the foundations of GST in comprehensive detail. The most important feature of GST, compared to older state and process tomography protocols, is that it is calibration-free. GST does not rely on pre-calibrated state preparations and measurements. Instead, it characterizes all the operations in a gate set simultaneously and self-consistently, relative to each other. Long sequence GST can estimate gates with very high precision and efficiency, achieving Heisenberg scaling in regimes of practical interest. In this paper, we cover GST's intellectual history, the techniques and experiments used to achieve its intended purpose, data analysis, gauge freedom and fixing, error bars, and the interpretation of gauge-fixed estimates of gate sets. Our focus is fundamental mathematical aspects of GST, rather than implementation details, but we touch on some of the foundational algorithmic tricks used in the pyGSTi implementation.
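To make the self-consistency point concrete, here is a minimal numpy sketch (illustrative only, not pyGSTi's API, using a hypothetical one-qubit gate set) of how a gate set predicts circuit outcome probabilities in the Pauli transfer matrix picture, p = <<E| G_k ... G_1 |rho>>, and of the gauge freedom the abstract mentions: any invertible M maps (|rho>>, <<E|, G) to (M|rho>>, <<E|M^-1, M G M^-1) without changing any predicted probability, so data can only determine a gate set up to this freedom.

    import numpy as np

    # PTM picture: a state is a length-4 real vector |rho>>, an effect a
    # length-4 covector <<E|, a gate a 4x4 real matrix, and a circuit's
    # outcome probability is p = <<E| G_k ... G_1 |rho>>.
    rho = np.array([1.0, 0, 0, 1]) / np.sqrt(2)  # |0><0| in the normalized Pauli basis
    E = np.array([1.0, 0, 0, 1]) / np.sqrt(2)    # effect for outcome "0"
    c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
    G_x = np.array([[1, 0, 0, 0],                # hypothetical X_{pi/2} gate
                    [0, 1, 0, 0],
                    [0, 0, c, -s],
                    [0, 0, s, c]])

    def prob(E, gates, rho):
        """Probability of outcome E after applying gates[0], gates[1], ... to rho."""
        v = rho
        for G in gates:
            v = G @ v
        return float(E @ v)

    print(prob(E, [G_x], rho))       # ~0.5: X_{pi/2} puts |0> on the equator
    print(prob(E, [G_x, G_x], rho))  # ~0.0: two X_{pi/2}'s flip |0> to |1>

    # Gauge freedom: transforming every element by an invertible M leaves
    # all predicted probabilities -- and hence all observable data -- unchanged.
    rng = np.random.default_rng(0)
    M = 4 * np.eye(4) + rng.normal(size=(4, 4))  # generic invertible matrix
    Minv = np.linalg.inv(M)
    rho_g, E_g, G_g = M @ rho, E @ Minv, M @ G_x @ Minv
    print(prob(E_g, [G_g, G_g], rho_g))          # same ~0.0 as above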
Quantum tomography is used to characterize quantum operations implemented in quantum information processing (QIP) hardware. Traditionally, state tomography has been used to characterize the quantum state prepared by an initialization procedure, while quantum process tomography is used to characterize dynamical operations on a QIP system. As such, tomography is critical to the development of QIP hardware: it is necessary both for debugging and validating as-built devices, and its results are used to influence the next generation of devices. But tomography suffers from several critical drawbacks. In this report, we present new research that resolves several of these flaws. We describe a new form of tomography called gate set tomography (GST), which unifies state and process tomography, avoids prior methods' critical reliance on precalibrated operations that are not generally available, and can achieve unprecedented accuracies. We report on theoretical and experimental development of adaptive tomography protocols that achieve far higher fidelity in state reconstruction than non-adaptive methods. Finally, we present a new theoretical and experimental analysis of process tomography on multispin systems, and demonstrate how to more effectively detect and characterize quantum noise using carefully tailored ensembles of input states.
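For contrast with GST's calibration-free approach, here is a textbook linear-inversion sketch of single-qubit state tomography (an illustration with made-up counts, not taken from the report). The reconstruction rho = (I + <X> sx + <Y> sy + <Z> sz) / 2 is only as accurate as the assumption that the measured axes really are the Pauli operators, i.e., it relies on precalibrated measurements.

    import numpy as np

    # Textbook linear-inversion state tomography for one qubit:
    #   rho = (I + <X> sx + <Y> sy + <Z> sz) / 2
    # The formula silently assumes the measured axes are exactly the Pauli
    # operators, i.e., precalibrated measurements.
    I2 = np.eye(2, dtype=complex)
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)

    # Hypothetical "+1" outcome counts from N shots in each Pauli basis.
    N = 1000
    counts = {"X": 853, "Y": 497, "Z": 641}
    ev = {basis: 2 * n / N - 1 for basis, n in counts.items()}  # <P> = p(+1) - p(-1)

    rho = (I2 + ev["X"] * sx + ev["Y"] * sy + ev["Z"] * sz) / 2
    print(np.round(rho, 3))
    print("eigenvalues:", np.round(np.linalg.eigvalsh(rho), 3))
    # Finite-sample noise can push an eigenvalue below zero, one reason
    # constrained estimators (e.g., maximum likelihood) are used in practice.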
Quantum state tomography on a d-dimensional system demands resources that grow rapidly with d. They may be reduced by using model selection to tailor the number of parameters in the model (i.e., the size of the density matrix). Most model selection methods rely on a test statistic and a null theory that describes its behavior when two models are equally good. Here, we consider the loglikelihood ratio. Because of the positivity constraint ρ ≥ 0, quantum state space does not generally satisfy local asymptotic normality (LAN), meaning the classical null theory for the loglikelihood ratio (the Wilks theorem) should not be used. Thus, understanding and quantifying how positivity affects the null behavior of this test statistic is necessary for its use in model selection for state tomography. We define a new generalization of LAN, metric-projected local asymptotic normality, show that quantum state space satisfies it, and derive a replacement for the Wilks theorem. In addition to enabling reliable model selection, our results shed further light on the qualitative effects of the positivity constraint on state tomography.
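The boundary effect can be illustrated with a classical one-parameter analogue (a sketch of the general phenomenon, not the paper's metric-projected construction): when the true parameter sits on the boundary of the constraint, half of the loglikelihood ratio statistic's null mass collapses to zero, so its distribution is (1/2)δ₀ + (1/2)χ²₁ rather than the χ²₁ that the Wilks theorem predicts.

    import numpy as np

    # Classical analogue of a boundary's effect on the Wilks theorem.
    # Data: n samples from N(theta, 1), true theta = 0, which lies on the
    # boundary of the constrained model theta >= 0. The MLE is max(0, xbar),
    # so the loglikelihood ratio statistic for H0: theta = 0 is
    #   lambda = (max(0, sqrt(n) * xbar))^2,
    # distributed as (1/2) delta_0 + (1/2) chi^2_1 -- not Wilks' chi^2_1.
    rng = np.random.default_rng(1)
    n, trials = 100, 200_000
    xbar = rng.normal(0.0, 1.0 / np.sqrt(n), size=trials)  # sample means
    lam = np.maximum(0.0, np.sqrt(n) * xbar) ** 2

    print("E[lambda]     =", lam.mean())         # ~0.5 (Wilks would give 1.0)
    print("P(lambda = 0) =", (lam == 0).mean())  # ~0.5 of the mass sits at zero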
Quantum computing systems need to be benchmarked in terms of the practical tasks they will be expected to perform. Here, we propose three "application-motivated" circuit classes for benchmarking: deep circuits (relevant to state preparation in the variational quantum eigensolver algorithm), shallow circuits (inspired by IQP-type circuits that might be useful for near-term quantum machine learning), and square circuits (inspired by the quantum volume benchmark). We quantify the performance of a quantum computing system on circuits from these classes using several figures of merit, all of which require exponential classical computing resources but only a polynomial number of classical samples (bitstrings) from the system. We study how performance varies with the compilation strategy used and the device on which the circuit is run. Using systems made available by IBM Quantum, we examine their performance, showing that noise-aware compilation strategies may be beneficial, and that device connectivity and noise levels play a crucial role in a system's performance according to our benchmarks.
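As an example of a figure of merit in this family, the sketch below estimates the heavy-output probability used by the quantum volume benchmark (the paper's own metrics are not reproduced here): the ideal output distribution must be simulated classically, at exponential cost in qubit count, while scoring the device requires only a polynomial number of sampled bitstrings.

    import numpy as np

    # Heavy-output probability, as in the quantum volume benchmark.
    # "Heavy" outputs are bitstrings whose ideal probability exceeds the
    # median ideal probability; an ideal device scores ~0.85 on random
    # circuits, while a fully depolarized one scores 0.5.
    rng = np.random.default_rng(2)
    n_qubits = 4
    dim = 2 ** n_qubits

    # Stand-in for the classically simulated ideal output distribution of
    # one random circuit (the real simulation cost grows exponentially).
    ideal = rng.dirichlet(np.ones(dim))
    heavy = set(np.flatnonzero(ideal > np.median(ideal)))

    # Stand-in for device samples: the ideal distribution mixed with noise.
    device = 0.7 * ideal + 0.3 / dim
    samples = rng.choice(dim, size=1000, p=device)

    hop = np.mean([s in heavy for s in samples])
    print("heavy-output probability:", hop)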