2021
DOI: 10.1103/PRXQuantum.2.040324

Experimental Deep Reinforcement Learning for Error-Robust Gate-Set Design on a Superconducting Quantum Computer

Abstract: Quantum computers promise tremendous impact across applications - and have shown great strides in hardware engineering - but remain notoriously error prone. Careful design of low-level controls has been shown to compensate for the processes which induce hardware errors, leveraging techniques from optimal and robust control. However, these techniques rely heavily on the availability of highly accurate and detailed physical models which generally only achieve sufficient representative fidelity for the most simple …
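The report does not include any reference code, but the black-box control-design idea summarized in the abstract can be illustrated with a toy sketch. The snippet below is not the paper's deep-reinforcement-learning agent or experimental setup; it is a minimal stand-in, assuming a hypothetical single-qubit model in which piecewise-constant pulse amplitudes are scored only by a simulated gate fidelity against a target X gate and improved by a gradient-free cross-entropy search. All parameters (segment count, detuning, population sizes) are illustrative assumptions.

```python
# Illustrative sketch only: a black-box (gradient-free) search over piecewise-constant
# pulse amplitudes for a single-qubit X gate. This is NOT the paper's DRL agent or
# hardware loop; it only mimics the "optimize from measured fidelity alone" structure.
# All model parameters below are hypothetical.
import numpy as np
from scipy.linalg import expm

SX = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli X (drive axis)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli Z (static error term)
TARGET = SX                                      # ideal gate to implement
N_SEG, DT = 8, 0.25                              # pulse segments and segment duration
DETUNING = 0.05                                  # hypothetical unknown error the search must absorb

def propagate(amplitudes):
    """Unitary produced by a piecewise-constant drive with a fixed detuning error."""
    u = np.eye(2, dtype=complex)
    for a in amplitudes:
        h = 0.5 * (a * SX + DETUNING * SZ)
        u = expm(-1j * h * DT) @ u
    return u

def avg_gate_fidelity(u):
    """Average gate fidelity of u against the target for a single qubit (d = 2)."""
    d = 2
    tr = np.trace(TARGET.conj().T @ u)
    return (abs(tr) ** 2 + d) / (d * (d + 1))

def cross_entropy_search(iters=60, pop=64, elite=8, seed=0):
    """Sample pulse candidates, keep the best scorers, refit the sampler, repeat."""
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(N_SEG), np.full(N_SEG, 2.0)
    for _ in range(iters):
        candidates = rng.normal(mean, std, size=(pop, N_SEG))
        scores = np.array([avg_gate_fidelity(propagate(c)) for c in candidates])
        best = candidates[np.argsort(scores)[-elite:]]
        mean, std = best.mean(axis=0), best.std(axis=0) + 1e-3
    return mean, avg_gate_fidelity(propagate(mean))

if __name__ == "__main__":
    pulse, fid = cross_entropy_search()
    print(f"best fidelity: {fid:.5f}")
    print("pulse amplitudes:", np.round(pulse, 3))
```

In the experiment the abstract describes, the scoring step would come from measurements on the superconducting device rather than a simulation, and the simple sampler would be replaced by the deep reinforcement learning agent; the sketch only conveys the model-free shape of such a loop.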

Cited by 86 publications (68 citation statements). References 64 publications.
“…The work [60] proposes a black-box approach to designing an error-robust universal quantum gate set using a deep reinforcement learning (DRL) model, as shown in the inset of Fig. 7(a).…”
Section: B. Reinforcement Learning for Error-Robust Gate Set Design (mentioning)
Confidence: 99%
“…However, these methods are currently tested on specific limited cases, and insights are difficult to generalize, e.g. see [15][16][17]. In this paper we follow the techniques described in Ref.…”
Section: Pulse Engineering Approach (mentioning)
Confidence: 99%
“…This included both pure control tasks (e.g. [28][29][30][31], even in an experiment [32]) but in particular also the more challenging quantum real-time feedback tasks that rely on adaptive responses to measurement outcomes [33][34][35][36].…”
Section: Introduction (mentioning)
Confidence: 99%
“…On the one hand, this can be an advantage in applying it to experimental setups whose parameters are partially unknown (as emphasized e.g. in [32,35]). On the other hand, much of the training time is spent in learning (implicitly) a model of the dynamics while simultaneously attempting to find good feedback strategies.…”
Section: Introduction (mentioning)
Confidence: 99%