2020
DOI: 10.48550/arxiv.2012.06701
Preprint
Noise-Robust End-to-End Quantum Control using Deep Autoregressive Policy Networks

Abstract: Variational quantum eigensolvers have recently received increased attention, as they enable the use of quantum computing devices to find solutions to complex problems, such as the ground energy and ground state of strongly-correlated quantum many-body systems. In many applications, it is the optimization of both continuous and discrete parameters that poses a formidable challenge. Using reinforcement learning (RL), we present a hybrid policy gradient algorithm capable of simultaneously optimizing continuous an…
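The abstract describes a hybrid policy-gradient method that optimizes discrete and continuous parameters simultaneously. A minimal REINFORCE-style sketch of that general idea follows; the toy cost landscape, learning rates, and per-branch Gaussian heads are illustrative assumptions, not the paper's actual architecture or Hamiltonian:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy cost landscape (hypothetical): a discrete branch k and a continuous
# angle theta. The optimum is k=1, theta=1.
def cost(k, theta):
    if k == 0:
        return 2.0 + theta**2          # best achievable: 2.0
    return (theta - 1.0) ** 2          # best achievable: 0.0

# Policy: softmax head over discrete branches, one Gaussian head per branch.
logits = np.zeros(2)                   # discrete-action preferences
mu = np.zeros(2)                       # Gaussian means
log_sigma = np.full(2, -0.5)           # Gaussian log-std-devs

lr, baseline = 0.05, 0.0
for _ in range(3000):
    # Sample a discrete action k, then a continuous action theta given k.
    p = np.exp(logits - logits.max()); p /= p.sum()
    k = rng.choice(2, p=p)
    sigma = np.exp(log_sigma[k])
    theta = rng.normal(mu[k], sigma)

    reward = -cost(k, theta)
    adv = reward - baseline            # advantage w.r.t. a running baseline
    baseline += 0.05 * (reward - baseline)

    # REINFORCE: params += lr * advantage * grad log pi(action | params).
    grad_logits = -p; grad_logits[k] += 1.0
    logits += lr * adv * grad_logits
    mu[k] += lr * adv * (theta - mu[k]) / sigma**2
    log_sigma[k] += lr * adv * ((theta - mu[k]) ** 2 / sigma**2 - 1.0)
    log_sigma = np.clip(log_sigma, -2.0, 0.5)  # keep step sizes bounded

best_k = int(np.argmax(logits))
```

Both heads are trained from the same scalar reward, which is what makes the discrete and continuous optimization "simultaneous"; the paper additionally parameterizes the policy with a deep autoregressive network, which this sketch omits.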

Cited by 5 publications (10 citation statements)
References 99 publications
“…• The proposed MCTS-QAOA algorithm produces accurate results for problems that appear difficult or infeasible for previous algorithms based on the generalized QAOA ansatz, such as RL-QAOA (Yao et al, 2020b). In particular, MCTS-QAOA shows superior performance in the large protocol duration regime, where the hybrid optimization becomes challenging.…”
Section: Contributions
confidence: 92%
“…For the methods solving the generalized QAOA problem summarized in Table 1, the CD-QAOA algorithm cannot be applied to problems with noise since the continuous solver is not noise-resilient, while the RL-QAOA algorithm has been shown to be effective with relatively short total duration JT (using unnormalized Hamiltonians (Yao et al, 2020b)). Therefore, we use RL-QAOA as a baseline when evaluating the performance of MCTS-QAOA, and we focus on the more challenging regime of large JT with normalized Hamiltonians.…”
Section: Comparison with RL-QAOA
confidence: 99%
“…While MPS-based algorithms have been used in the context of optimal many-body control to find high-fidelity protocols that manipulate interacting ultracold quantum gases [17][18][19], the advantages of deep reinforcement learning (RL) for quantum control [20] have so far been investigated using exact simulations of only a small number of interacting quantum degrees of freedom. Nevertheless, policy-gradient and value-function RL algorithms have recently been established as useful tools in the study of quantum state preparation [21][22][23][24][25][26][27][28][29][30][31][32][33], quantum error correction and mitigation [34][35][36][37], quantum circuit design [38][39][40][41], and quantum metrology [42,43]; quantum reinforcement learning algorithms have been proposed as well [44][45][46][47][48]. Thus, in times of rapidly developing quantum simulators which exceed the computational capabilities of classical computers [49], the natural question arises of how to scale up the size of quantum systems in RL control studies beyond exact diagonalization methods.…”
Section: Introduction
confidence: 99%
“…Specifically, we train a deep reinforcement learning agent to minimize the loss of random quantum variational circuits. There have been several previous applications of reinforcement learning to aid with some of the challenges of optimizing QML systems [35][36][37][38][39][40]. However, many of these works are limited in their applicable problem space, e.g.…”
Section: Introduction
confidence: 99%