We prove that quantum expander codes can be combined with quantum fault-tolerance techniques to achieve constant overhead: the ratio between the total number of physical qubits required for a quantum computation with faulty hardware and the number of logical qubits involved in the ideal computation is asymptotically constant, and can even be taken arbitrarily close to 1 in the limit of small physical error rate. This improves on the polylogarithmic overhead promised by the standard threshold theorem. To achieve this, we exploit a framework introduced by Gottesman together with a family of constant-rate quantum codes, quantum expander codes. Our main technical contribution is to analyze an efficient decoding algorithm for these codes and prove that it remains robust in the presence of noisy syndrome measurements, a property which is crucial for fault-tolerant circuits. We also establish two additional features of the decoding algorithm that make it attractive for quantum computation: it can be parallelized to run in logarithmic depth, and it is single-shot, meaning that it only requires a single round of noisy syndrome measurement.

For a circuit on k qubits with |C| locations, O(log log(|C|/ε)) levels of encoding are needed to reach a target error ε, which translates into a polylog(|C|/ε) space overhead. While this might seem like a reasonably small overhead, it remains rather prohibitive in practice, and more importantly, it raises the question of whether this value is optimal. In this paper, we consider a realistic model for quantum computing where the quantum gates are noisy, but all classical computation is assumed to be fast and error-free. Note that if classical gates are also noisy, then it is known that classical fault-tolerance cannot be obtained with constant overhead [20, 10].

In a breakthrough paper, Gottesman showed that the polylogarithmic overhead may not be necessary after all, and that polynomial-time computations can be performed by a noisy circuit with only a constant overhead [14]. In fact, the constant can even be taken arbitrarily close to 1 provided that the physical error rate is sufficiently small. In order to overcome the polylogarithmic barrier, Gottesman suggested using quantum error-correcting codes with constant rate. More precisely, the idea is to encode the logical qubits in large blocks, still of size sublinear in k. The encoding can still be made fault-tolerant thanks to concatenation, but this only yields an overhead polylogarithmic in the block size, and choosing a sufficiently small block size leads to a sub-linear overhead for encoding. Gates are then performed with Knill's technique by teleporting the appropriate encoded states. Overall, apart from the initial preparation and final measurement, the encoded circuit alternates between steps applying a gate of the original circuit on the encoded state with Knill's technique, and steps of error correction for the quantum code, each consisting of a measurement of the syndrome, running a (sufficiently fast) classical decoding algorithm, and applying the necessary corrections.
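To make the asymptotics concrete, the sketch below (Python; all numerical parameters are illustrative assumptions, not values from the paper) compares the qubit overhead of a concatenated scheme, where the number of levels grows as O(log log(|C|/ε)) and the overhead as polylog(|C|/ε), with the circuit-size-independent overhead of a constant-rate code.

```python
def concatenation_levels(num_locations, target_error, p_phys=1e-4, p_th=1e-2):
    """Levels of concatenation L needed so that the per-location logical error rate
    p_th * (p_phys / p_th) ** (2 ** L) drops below target_error / num_locations.
    (Standard threshold-theorem style estimate; p_phys and p_th are illustrative.)"""
    per_location_target = target_error / num_locations
    L = 0
    while p_th * (p_phys / p_th) ** (2 ** L) > per_location_target:
        L += 1
    return L

def concatenated_overhead(num_locations, target_error, qubits_per_block=7):
    """Physical qubits per logical qubit after L levels of concatenating a
    (hypothetical) code encoding 1 qubit into `qubits_per_block` qubits."""
    L = concatenation_levels(num_locations, target_error)
    return qubits_per_block ** L

def constant_rate_overhead(rate=0.1):
    """Overhead of a constant-rate code (assumed rate 0.1 for illustration):
    simply 1/rate, independent of the circuit size."""
    return 1.0 / rate

if __name__ == "__main__":
    # Overhead grows (slowly) with circuit size for concatenation,
    # but stays fixed for the constant-rate code.
    for locations in (10**6, 10**9, 10**12):
        print(locations,
              concatenated_overhead(locations, target_error=0.01),
              constant_rate_overhead())
```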
We show that quantum expander codes, a constant-rate family of quantum LDPC codes, with the quasi-linear time decoding algorithm of Leverrier, Tillich and Zémor can correct a constant fraction of random errors with very high probability. This is the first construction of a constant-rate quantum LDPC code with an efficient decoding algorithm that can correct a linear number of random errors with a negligible failure probability. Finding codes with these properties is also motivated by Gottesman's construction of fault-tolerant schemes with constant space overhead. In order to obtain this result, we study a notion of α-percolation: for a random subset E of vertices of a given graph, we consider the size of the largest connected α-subset of E, where X is an α-subset of E if |X ∩ E| ≥ α|X|.
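The α-percolation quantity can be made concrete with a brute-force check on a toy graph. The Python sketch below is purely illustrative: the grid graph, the density of E, and α = 0.5 are arbitrary choices, and the enumeration is exponential, so it is only usable on very small graphs. It computes the size of the largest connected α-subset of a random vertex set E directly from the definition |X ∩ E| ≥ α|X|.

```python
from itertools import combinations
import random

def is_connected(vertices, adj):
    """Check whether `vertices` induce a connected subgraph of the graph
    given by the adjacency dict `adj`."""
    vertices = set(vertices)
    stack = [next(iter(vertices))]
    seen = set()
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack.extend(u for u in adj[v] if u in vertices and u not in seen)
    return seen == vertices

def largest_connected_alpha_subset(adj, E, alpha):
    """Brute-force the size of the largest connected X with |X ∩ E| >= alpha*|X|.
    Exponential in the number of vertices -- only meant for toy graphs."""
    best = 0
    V = list(adj)
    for size in range(1, len(V) + 1):
        for X in combinations(V, size):
            if len(set(X) & E) >= alpha * size and is_connected(X, adj):
                best = max(best, size)
    return best

if __name__ == "__main__":
    # A 3x3 grid graph as a toy example; E is a random vertex set of density 0.3.
    adj = {(i, j): [(i + di, j + dj)
                    for di, dj in [(1, 0), (-1, 0), (0, 1), (0, -1)]
                    if 0 <= i + di < 3 and 0 <= j + dj < 3]
           for i in range(3) for j in range(3)}
    E = {v for v in adj if random.random() < 0.3}
    print(largest_connected_alpha_subset(adj, E, alpha=0.5))
```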
The threshold theorem is a seminal result in the field of quantum computing asserting that arbitrarily long quantum computations can be performed on a faulty quantum computer provided that the noise level is below some constant threshold. This remarkable result comes at the price of increasing the number of qubits (quantum bits) by a large factor that scales polylogarithmically with the size of the quantum computation we wish to realize. Minimizing the space overhead for fault-tolerant quantum computation is a pressing challenge that is crucial to benefit from the computational potential of quantum devices. In this paper, we study the asymptotic scaling of the space overhead needed for fault-tolerant quantum computation. We show that the polylogarithmic factor in the standard threshold theorem is in fact not needed and that there is a fault-tolerant construction that uses a number of qubits that is only a constant factor more than the number of qubits of the ideal computation. This result was conjectured by Gottesman, who suggested replacing the concatenated codes from the standard threshold theorem with quantum error-correcting codes with a constant encoding rate. The main challenge was then to find an appropriate family of quantum codes together with an efficient classical decoding algorithm working even with a noisy syndrome. The efficiency constraint is crucial here: bear in mind that qubits are inherently noisy and that faults keep accumulating during the decoding process. The role of the decoder is therefore to keep the number of errors under control during the whole computation. On a technical level, our main contribution is the analysis of the SMALL-SET-FLIP decoding algorithm applied to the family of quantum expander codes. We show that it can be parallelized to run in constant time while correcting sufficiently many errors on both the qubits and the syndrome to keep the error under control. These tools can be seen as a quantum generalization of the BIT-FLIP algorithm applied to the (classical) expander codes of Sipser and Spielman.
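For readers unfamiliar with the classical analogue, the sketch below gives a minimal Python implementation of the greedy BIT-FLIP idea for a classical code specified by a parity-check matrix: repeatedly flip any bit whose flip strictly reduces the syndrome weight. The [7,4] Hamming matrix in the demo is only a small stand-in, not an expander code, so the returned correction may clear the syndrome while differing from the true error by a codeword; the expansion property of Sipser-Spielman codes is what rules this out for low-weight errors, and SMALL-SET-FLIP plays the corresponding role for quantum expander codes.

```python
import numpy as np

def bit_flip_decode(H, syndrome, max_iters=100):
    """Greedy bit-flip decoding for a classical LDPC-style code: flip any bit
    that strictly reduces the weight of the current syndrome. H is a binary
    parity-check matrix and syndrome = H @ e mod 2 for the unknown error e.
    Returns an estimated correction, or None if the search gets stuck."""
    m, n = H.shape
    correction = np.zeros(n, dtype=np.uint8)
    s = syndrome.copy() % 2
    for _ in range(max_iters):
        if not s.any():
            return correction                 # syndrome cleared: done
        flipped = False
        for j in range(n):
            new_s = (s + H[:, j]) % 2         # flipping bit j toggles column j's checks
            if new_s.sum() < s.sum():         # flip only if it lowers the syndrome weight
                correction[j] ^= 1
                s = new_s
                flipped = True
        if not flipped:
            return None                       # stuck in a local minimum
    return None

if __name__ == "__main__":
    # Toy [7,4] Hamming parity-check matrix (not an expander code). On such a
    # small non-expander code, the greedy flips may clear the syndrome with a
    # correction that differs from the true error by a codeword.
    H = np.array([[1, 1, 1, 0, 1, 0, 0],
                  [1, 1, 0, 1, 0, 1, 0],
                  [1, 0, 1, 1, 0, 0, 1]], dtype=np.uint8)
    error = np.zeros(7, dtype=np.uint8)
    error[2] = 1
    print(bit_flip_decode(H, (H @ error) % 2))
```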
Hypergraph product codes are a class of constant-rate quantum low-density parity-check (LDPC) codes equipped with a linear-time decoder called small-set-flip (SSF). This decoder displays sub-optimal performance in practice and requires very large error correcting codes to be effective. In this work, we present new hybrid decoders that combine the belief propagation (BP) algorithm with the SSF decoder. We present the results of numerical simulations when codes are subject to independent bit-flip and phase-flip errors. We provide evidence that the threshold of these codes is roughly 7.5% assuming an ideal syndrome extraction, and remains close to 3% in the presence of syndrome noise. This result subsumes and significantly improves upon an earlier work by Grospellier and Krishna (arXiv:1810.03681). The low complexity and high performance of these heuristic decoders suggest that decoding should not be a substantial difficulty when moving from zero-rate surface codes to constant-rate LDPC codes, and give a further hint that such codes are well worth investigating in the context of building large universal quantum computers.
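The kind of threshold estimate described above rests on a simple Monte Carlo loop. The Python sketch below is a deliberate simplification for illustration only: it uses a classical repetition code with a minimum-weight decoder instead of hypergraph product codes with BP/SSF, and all rates and sizes are placeholder values. It shows the basic simulation structure for independent bit-flip errors, with an optional syndrome flip rate q standing in for noisy syndrome extraction.

```python
import numpy as np

def repetition_decoder(syndrome):
    """Minimum-weight decoder for the length-n repetition code with checks
    s_i = e_i XOR e_{i+1}: the two error patterns consistent with the syndrome
    are complements of each other, so return the lighter one."""
    n = len(syndrome) + 1
    e = np.zeros(n, dtype=np.uint8)
    for i, s in enumerate(syndrome):
        e[i + 1] = e[i] ^ s
    return e if e.sum() <= n - e.sum() else 1 - e

def logical_error_rate(n, p, q=0.0, trials=20000, rng=None):
    """Monte Carlo estimate of the failure rate of the length-n repetition code
    under i.i.d. bit flips of rate p and i.i.d. syndrome flips of rate q.
    With an ideal syndrome (q=0) a nonzero residual is necessarily a codeword,
    i.e. a logical flip; with q>0 any leftover error is counted as a failure,
    which is a deliberately strict single-shot criterion."""
    rng = rng or np.random.default_rng()
    failures = 0
    for _ in range(trials):
        error = (rng.random(n) < p).astype(np.uint8)
        syndrome = error[:-1] ^ error[1:]                        # ideal extraction
        syndrome ^= (rng.random(n - 1) < q).astype(np.uint8)     # syndrome noise
        residual = error ^ repetition_decoder(syndrome)
        if residual.any():
            failures += 1
    return failures / trials

if __name__ == "__main__":
    # Sweep the physical error rate, with and without syndrome measurement noise.
    for p in (0.05, 0.1, 0.2, 0.3, 0.4):
        print(p, logical_error_rate(31, p), logical_error_rate(31, p, q=p))
```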