2019
DOI: 10.22331/q-2019-09-02-183

Quantum error correction for the toric code using deep reinforcement learning

Abstract: We implement a quantum error correction algorithm for bit-flip errors on the topological toric code using deep reinforcement learning. An action-value Q-function encodes the discounted value of moving a defect to a neighboring site on the square grid (the action) depending on the full set of defects on the torus (the syndrome or state). The Q-function is represented by a deep convolutional neural network. Using the translational invariance on the torus allows for viewing each defect from a central perspective …
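
As a rough illustration of the approach described in the abstract (a sketch, not the authors' implementation), the Python snippet below centers the syndrome on a chosen defect using the torus's periodic boundaries and scores the four candidate moves of that defect with a small convolutional Q-network. The grid size, the network architecture, and the helper names (center_on_defect, ToricQNet, choose_move) are assumptions made for this example only.

# Minimal sketch of the decoder idea from the abstract: periodically shift the
# d x d syndrome so a chosen defect sits at the center ("central perspective"),
# then let a convolutional Q-network score the four possible moves.
import numpy as np
import torch
import torch.nn as nn

GRID = 7          # assumed code distance d (d x d plaquettes on the torus)
ACTIONS = 4       # move the central defect up, down, left, or right

def center_on_defect(syndrome: np.ndarray, defect: tuple) -> np.ndarray:
    """Periodically shift the syndrome so `defect` lands at the grid center."""
    dx, dy = GRID // 2 - defect[0], GRID // 2 - defect[1]
    return np.roll(np.roll(syndrome, dx, axis=0), dy, axis=1)

class ToricQNet(nn.Module):
    """Convolutional action-value network Q(state, action)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * GRID * GRID, 128), nn.ReLU(),
            nn.Linear(128, ACTIONS),   # one Q-value per move direction
        )

    def forward(self, x):
        return self.head(self.features(x))

def choose_move(qnet, syndrome):
    """Greedy action: view each defect from the center, keep the best Q-value."""
    best = None
    for defect in zip(*np.nonzero(syndrome)):
        centered = center_on_defect(syndrome, defect)
        inp = torch.tensor(centered, dtype=torch.float32).view(1, 1, GRID, GRID)
        q = qnet(inp).detach().numpy().ravel()
        a = int(q.argmax())
        if best is None or q[a] > best[0]:
            best = (q[a], defect, a)
    return best  # (Q-value, defect position, action index)

# Example usage on a toy syndrome with two defects.
qnet = ToricQNet()
syndrome = np.zeros((GRID, GRID), dtype=np.int8)
syndrome[1, 2] = 1
syndrome[4, 5] = 1
print(choose_move(qnet, syndrome))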

Cited by 90 publications (84 citation statements)
References 46 publications

“…The decoder will succeed if the probability of spin and phase flip errors between the projective measurements is lower than a given threshold. This threshold ranges between 2% and 11% depending on the chosen error model [42][43][44][45][46][49][50][51].…”
Section: Results (mentioning)
confidence: 99%
“…In an alternative approach to using the toric code, one arrives at a ground state without the need to engineer any Hamiltonian; instead, an arbitrary state is successively projected into the desired state by a series of projective measurements [40,41]. The performance of these methods depends on the frequency, correlations and types of errors [42][43][44][45][46], and we comment below on how our algorithm relates to these approaches.…”
Section: Introduction (mentioning)
confidence: 99%
“…The authors would like to thank Arun B. Aloshious for valuable discussions. During the preparation of this manuscript, five related preprints were made available [34][35][36][37][38]; however, their scope and emphasis differ from our work. This work was completed while CC was associated with the Indian Institute of Technology Madras as part of his Dual Degree thesis.…”
Section: Acknowledgements (mentioning)
confidence: 99%
“…The search for optimal control can naturally be formulated as reinforcement learning (RL) [11][12][13][14][15][16][17][18][19], a discipline of machine learning. RL has been used in the context of quantum control [17], to design experiments in quantum optics [20], and to automatically generate sequences of gates and measurements for quantum error correction [16,21,22].…”
Section: Introduction (mentioning)
confidence: 99%
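
The excerpt above frames error correction as a reinforcement learning problem. As a rough illustration of that mapping (a toy sketch, not the method of any cited work), the snippet below runs tabular Q-learning on an assumed 3-qubit bit-flip repetition code: states are syndromes, actions are single-qubit corrections, and the reward favours clearing the syndrome quickly. The code size, hyperparameters, and reward shaping are illustrative assumptions.

# Toy mapping of error correction onto reinforcement learning:
# state = syndrome, action = flip one qubit, reward = syndrome cleared.
import random
from collections import defaultdict

N_QUBITS = 3                       # assumed toy bit-flip repetition code
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def syndrome(errors):
    """Parity checks between neighbouring qubits (tuple of 0/1)."""
    return tuple(errors[i] ^ errors[i + 1] for i in range(N_QUBITS - 1))

CLEAR = (0,) * (N_QUBITS - 1)
Q = defaultdict(float)             # Q[(state, action)], default 0.0

for episode in range(5000):
    errors = [int(random.random() < 0.2) for _ in range(N_QUBITS)]
    state = syndrome(errors)
    for step in range(N_QUBITS):
        if state == CLEAR:
            break                                  # syndrome cleared
        # epsilon-greedy choice of which qubit to flip
        if random.random() < EPS:
            action = random.randrange(N_QUBITS)
        else:
            action = max(range(N_QUBITS), key=lambda a: Q[(state, a)])
        errors[action] ^= 1                        # apply the correction
        next_state = syndrome(errors)
        reward = 1.0 if next_state == CLEAR else -0.1
        best_next = max(Q[(next_state, a)] for a in range(N_QUBITS))
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# After training, the greedy policy picks the qubit whose flip clears the syndrome
# in one step (typically qubit 0 for syndrome (1, 0)).
print(max(range(N_QUBITS), key=lambda a: Q[((1, 0), a)]))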