We prove that quantum expander codes can be combined with quantum fault-tolerance techniques to achieve constant overhead: the ratio between the total number of physical qubits required for a quantum computation with faulty hardware and the number of logical qubits involved in the ideal computation is asymptotically constant, and can even be taken arbitrarily close to 1 in the limit of small physical error rate. This improves on the polylogarithmic overhead promised by the standard threshold theorem. To achieve this, we exploit a framework introduced by Gottesman together with a family of constant-rate quantum codes, quantum expander codes. Our main technical contribution is to analyze an efficient decoding algorithm for these codes and prove that it remains robust in the presence of noisy syndrome measurements, a property which is crucial for fault-tolerant circuits. We also establish two additional features of the decoding algorithm that make it attractive for quantum computation: it can be parallelized to run in logarithmic depth, and it is single-shot, meaning that it requires only a single round of noisy syndrome measurement.

With concatenated codes and the standard threshold theorem, simulating a circuit on k qubits with |C| locations up to accuracy ε requires O(log log(|C|/ε)) levels of encoding, which translates into a polylog(|C|/ε) space overhead. While this might seem like a reasonably small overhead, it remains rather prohibitive in practice, and more importantly, it raises the question of whether this value is optimal. In this paper, we consider a realistic model for quantum computing where the quantum gates are noisy, but all classical computation is assumed to be fast and error-free. Note that if classical gates are also noisy, then it is known that classical fault-tolerance cannot be achieved with constant overhead [20, 10].

In a breakthrough paper, Gottesman has shown that the polylogarithmic overhead may not be necessary after all, and that polynomial-time computations can be performed by a noisy circuit with only a constant overhead [14]. In fact, the constant can even be taken arbitrarily close to 1 provided that the physical error rate is sufficiently small. In order to overcome the polylogarithmic barrier, Gottesman suggested using quantum error-correcting codes with constant rate. More precisely, the idea is to encode the logical qubits in large blocks, but still of size sublinear in k. The encoding can still be made fault-tolerant thanks to concatenation, but this only yields an overhead polylogarithmic in the block size, and choosing a sufficiently small block size leads to a sub-linear overhead for encoding. Gates are then performed with Knill's technique by teleporting the appropriate encoded states. Overall, apart from the initial preparation and final measurement, the encoded circuit alternates between steps that apply a gate of the original circuit to the encoded state with Knill's technique, and error-correction steps for the quantum code, each consisting of a measurement of the syndrome, a run of a (sufficiently fast) classical decoding algorithm, and the application of the necessary correction.
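To make the polylog(|C|/ε) overhead of concatenation mentioned above explicit, here is the standard accounting; the threshold p_th and the per-level blow-up factor b below are generic constants of a concatenation scheme, not quantities taken from this paper. For the whole circuit to fail with probability at most ε, each of its |C| locations should fail with probability at most ε/|C|, and t levels of concatenation suppress a physical error rate p < p_th to

\[
p_t \;=\; p_{\mathrm{th}} \left( \frac{p}{p_{\mathrm{th}}} \right)^{2^{t}} \;\le\; \frac{\varepsilon}{|C|}
\qquad\text{for}\qquad
t \;=\; O\bigl(\log \log (|C|/\varepsilon)\bigr).
\]

Since every level of encoding multiplies the number of qubits by the constant b, the resulting space overhead is

\[
b^{\,t} \;=\; b^{\,O(\log \log (|C|/\varepsilon))} \;=\; \bigl(\log (|C|/\varepsilon)\bigr)^{O(1)} \;=\; \mathrm{polylog}(|C|/\varepsilon).
\]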
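As a deliberately simple cartoon of the error-correction step that closes each round (syndrome measurement, fast classical decoding, correction), the following self-contained Python toy runs the cycle for a classical 3-bit repetition code. It is our illustration only: the function names are ours, and the paper's actual setting is a quantum expander code decoded by the small-set-flip algorithm, not a repetition code.

import itertools  # unused here; kept minimal on purpose

def measure_syndrome(word):
    """Parity checks of the 3-bit repetition code: s1 = x0 xor x1, s2 = x1 xor x2."""
    return (word[0] ^ word[1], word[1] ^ word[2])

def decode(syndrome):
    """Each syndrome points to the unique single bit flip that explains it."""
    return {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[syndrome]

def correct(word, flip):
    """Apply the correction suggested by the decoder."""
    if flip is not None:
        word[flip] ^= 1
    return word

# Inject every single bit-flip error in turn and check that one
# measure / decode / correct cycle restores the codeword of 0.
for i in range(3):
    noisy = [0, 0, 0]
    noisy[i] ^= 1
    assert correct(noisy, decode(measure_syndrome(noisy))) == [0, 0, 0]

In the setting of this paper the analogous step is harder in two ways that the toy hides: the syndrome itself is measured by noisy gates, and the decoder must nevertheless keep the residual error bounded, which is precisely the robustness (and single-shot) property established for the quantum expander code decoder.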