2019
DOI: 10.1103/PhysRevA.99.042327
Learning robust and high-precision quantum controls

Abstract: Robust and high-precision quantum control is extremely important but challenging for the functionalization of scalable quantum computation. In this paper, we show that this hard problem can be translated into a supervised machine learning task by treating the time-ordered quantum evolution as a layer-ordered neural network (NN). Searching for robust quantum controls is then equivalent to training a highly generalizable NN, to which many tuning techniques that have matured in machine learning can be transferred. This opens …
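The abstract's analogy can be made concrete with a toy model. The sketch below uses illustrative assumptions that are not the paper's exact setup: a single qubit with drift 0.5·σz, a σx control Hamiltonian, and piecewise-constant amplitudes. The time-ordered propagator is composed slice by slice, exactly as activations flow through ordered NN layers, and scored with a gate fidelity.

```python
import numpy as np

# Illustrative toy model (not the paper's exact setup): a single qubit with
# drift 0.5*sz and control Hamiltonian sx, driven by piecewise-constant
# amplitudes u_k.  Each time slice plays the role of one NN layer: the
# propagator is built by composing the slices in time order, just as
# activations flow through ordered layers.

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def expm_herm(H, dt):
    """exp(-1j*dt*H) for Hermitian H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * dt * w)) @ V.conj().T

def propagate(controls, dt=0.1, H0=0.5 * sz, Hc=sx):
    """Time-ordered product of slice propagators (the "forward pass")."""
    U = np.eye(2, dtype=complex)
    for u in controls:
        U = expm_herm(H0 + u * Hc, dt) @ U
    return U

def fidelity(U, U_target):
    """Gate fidelity |tr(U_target^dag U)| / d, the training objective."""
    return abs(np.trace(U_target.conj().T @ U)) / U.shape[0]

controls = np.linspace(-1.0, 1.0, 20)  # one control amplitude per "layer"
U = propagate(controls)
print(fidelity(U, np.eye(2)))          # score against a target gate
```

Training then means adjusting the `controls` vector to maximize the fidelity, optionally averaged over sampled Hamiltonian uncertainties so that the optimized pulse generalizes, i.e. is robust.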

Cited by 119 publications (55 citation statements)
References 31 publications
“…The increase of control parameters can then be exploited for fine-tuned shaping of the effective Hamiltonian. Possible objectives of this procedure may be the minimization of the oscillations around the target adiabatic path 45, or the robustness against selected types of external noise 46,47. In this direction, the method may be combined with successful techniques from optimal control for carrying out the optimization task, such as stochastic gradient (learning) algorithms 47.…”
Section: Discussion
confidence: 99%
“…Possible objectives of this procedure may be the minimization of the oscillations around the target adiabatic path 45, or the robustness against selected types of external noise 46,47. In this direction, the method may be combined with successful techniques from optimal control for carrying out the optimization task, such as stochastic gradient (learning) algorithms 47. All the mentioned features of fmod-STIRAP make it particularly fascinating from the experimental point of view, showing its potential for generalizations to speed up adiabatic passages in more-level configurations, and hence for successful applications in many branches of quantum science.…”
Section: Discussion
confidence: 99%
“…The performance of the SLC approach can be further improved by exploring the richness and diversity of samples. Inspired by deep learning, a batch-based gradient algorithm (b-GRAPE) has been presented for efficiently seeking robust quantum controls, and numerical results showed that b-GRAPE can outperform the SLC method, remarkably enhancing control robustness while maintaining high fidelity (Wu et al 2019). In other applications where we need to enhance the robustness in closed-loop learning control, we may either use the Hessian matrix information (Xing et al 2014) or integrate the idea of SLC into the learning algorithm in searching for robust control fields.…”
Section: Learning-based Quantum Robust Control
confidence: 99%
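The batch idea behind b-GRAPE can be sketched as follows: every iteration draws a fresh batch of uncertainty samples and ascends the gradient of the batch-averaged fidelity. This is a hedged illustration, not the authors' implementation: the uniform detuning error, the toy qubit model, and the finite-difference gradient (standing in for the analytic GRAPE gradient) are all illustrative choices.

```python
import numpy as np

# Hedged sketch of the batch idea behind b-GRAPE (Wu et al. 2019): each
# iteration samples a fresh batch of uncertainties and ascends the gradient
# of the batch-averaged fidelity.  The uniform detuning error and the
# finite-difference gradient are illustrative stand-ins, not the authors'
# implementation (which uses analytic GRAPE gradients).

rng = np.random.default_rng(0)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def propagate(controls, delta, dt=0.2):
    """Slice propagators for H = (0.5 + delta)*sz + u*sx, in time order."""
    U = np.eye(2, dtype=complex)
    for u in controls:
        w, V = np.linalg.eigh((0.5 + delta) * sz + u * sx)
        U = (V @ np.diag(np.exp(-1j * dt * w)) @ V.conj().T) @ U
    return U

def batch_fidelity(controls, deltas, U_t):
    """Gate fidelity averaged over a batch of detuning errors."""
    return np.mean([abs(np.trace(U_t.conj().T @ propagate(controls, d))) / 2
                    for d in deltas])

U_t = sx                                 # target: a NOT gate
controls = rng.normal(0.0, 0.5, 10)      # initial pulse amplitudes
eps, lr = 1e-4, 1.0
for step in range(200):
    deltas = rng.uniform(-0.05, 0.05, 8)     # fresh batch each iteration
    grad = np.zeros_like(controls)
    for k in range(controls.size):           # finite-difference gradient
        e = np.zeros_like(controls)
        e[k] = eps
        grad[k] = (batch_fidelity(controls + e, deltas, U_t)
                   - batch_fidelity(controls - e, deltas, U_t)) / (2 * eps)
    controls += lr * grad                    # ascent on batch fidelity
f_final = batch_fidelity(controls, np.zeros(1), U_t)
print(f_final)                               # nominal (error-free) fidelity
```

Resampling the batch each iteration is what makes the procedure analogous to mini-batch stochastic gradient training of a NN: the pulse is never optimized against one fixed noise realization, so it cannot overfit to it.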
“…This difficulty is further exacerbated when control uncertainties are taken into account. One can employ machine learning methods [15][16][17][18][19][20][21] to deal with this problem. Another promising approach is to apply gradient-free algorithms, for example, the Nelder-Mead algorithm (NM) [22,23] and the differential evolution algorithm (DE) [5,6,24–26].…”
Section: Introduction
confidence: 99%
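As an illustration of the gradient-free route mentioned in the last excerpt, here is a bare-bones differential evolution (DE/rand/1/bin) loop maximizing gate fidelity for a driven-qubit model. The qubit model, target gate, and DE hyperparameters are illustrative choices, not those of the cited works.

```python
import numpy as np

# Bare-bones differential evolution (DE/rand/1/bin) for quantum control:
# a population of candidate pulse sequences is evolved by mutation,
# crossover, and greedy selection on the gate fidelity.  No gradients are
# needed.  All model and DE parameters here are illustrative.

rng = np.random.default_rng(1)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def fidelity(controls, dt=0.2):
    """|tr(U_target^dag U)|/2 for U built from piecewise-constant slices."""
    U = np.eye(2, dtype=complex)
    for u in controls:
        w, V = np.linalg.eigh(0.5 * sz + u * sx)
        U = (V @ np.diag(np.exp(-1j * dt * w)) @ V.conj().T) @ U
    return abs(np.trace(sx.conj().T @ U)) / 2   # target: a NOT gate

NP, D, F, CR = 20, 10, 0.7, 0.9                 # population, dims, DE params
pop = rng.uniform(-2.0, 2.0, (NP, D))
fit = np.array([fidelity(x) for x in pop])
for gen in range(150):
    for i in range(NP):
        idx = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
        a, b, c = pop[idx]
        mutant = a + F * (b - c)                # DE/rand/1 mutation
        cross = rng.random(D) < CR              # binomial crossover mask
        cross[rng.integers(D)] = True           # at least one mutated gene
        trial = np.where(cross, mutant, pop[i])
        f_trial = fidelity(trial)
        if f_trial > fit[i]:                    # greedy selection
            pop[i], fit[i] = trial, f_trial
best = fit.max()
print(best)                                     # best fidelity found
```

Because DE only compares fidelity values, it tolerates non-differentiable or noisy objectives, which is exactly why the excerpt pairs it with Nelder-Mead as an alternative when gradients are unreliable.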