2020
DOI: 10.22331/q-2020-08-31-314

Stochastic gradient descent for hybrid quantum-classical optimization

Abstract: Within the context of hybrid quantum-classical optimization, gradient descent based optimizers typically require the evaluation of expectation values with respect to the outcome of parameterized quantum circuits. In this work, we explore the consequences of the prior observation that estimation of these quantities on quantum hardware results in a form of stochastic gradient descent optimization. We formalize this notion, which allows us to show that in many relevant cases, including VQE, QAOA and certain quant…
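The observation above, that hardware estimation of expectation values turns gradient descent into stochastic gradient descent, can be made concrete with a toy model. The following is a minimal sketch, not the authors' code: a one-qubit cost C(θ) = ⟨0|e^{iθX} Z e^{-iθX}|0⟩ = cos(2θ) whose value is estimated from a small number of ±1 measurement outcomes and minimized with parameter-shift gradients; all function names and the shot count are illustrative assumptions.

```python
# Minimal sketch (assumptions: toy one-qubit model, illustrative names).
# Cost C(theta) = <0| e^{+i theta X} Z e^{-i theta X} |0> = cos(2 theta).
# Each expectation value is estimated from finitely many +/-1 shots, so
# the parameter-shift gradient is a stochastic estimate and the loop
# below is a form of stochastic gradient descent.
import numpy as np

rng = np.random.default_rng(0)

def exact_cost(theta):
    return np.cos(2 * theta)

def estimate_cost(theta, shots):
    # A Z measurement yields +1 with probability (1 + C)/2; the mean of
    # the +/-1 outcomes is an unbiased estimator of C(theta).
    p_plus = (1 + exact_cost(theta)) / 2
    samples = rng.choice([1.0, -1.0], size=shots, p=[p_plus, 1 - p_plus])
    return samples.mean()

def estimate_gradient(theta, shots):
    # Parameter-shift rule for a generator with eigenvalues +/-1:
    # dC/dtheta = C(theta + pi/4) - C(theta - pi/4).
    return estimate_cost(theta + np.pi / 4, shots) - estimate_cost(theta - np.pi / 4, shots)

theta, lr, shots = 0.3, 0.1, 10   # deliberately few shots per estimate
for step in range(200):
    theta -= lr * estimate_gradient(theta, shots)
print(f"theta = {theta:.3f}, exact cost = {exact_cost(theta):.3f}")  # cost near -1
```

Even with ten shots per expectation value, the iterates settle near the minimizer θ = π/2: the shot-based estimator is unbiased, so the noise behaves like minibatch noise in classical SGD.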

Cited by 209 publications (135 citation statements) · References 30 publications
“…The generated distributions are plotted in Figure 5b, and it is evident that the training is unaffected by the number of circuit runs. This is in agreement with the results from previous works, [47,48] where it was shown that for various hybrid quantum-classical optimization algorithms, the estimation of the expectation values can be done using a very small number of measurements. The number of samples can be considered a hyper-parameter of the algorithm that could be tuned or adjusted during the calculation.…”
Section: Theory Of Hybrid Quantum Generative Adversarial Networks (supporting)
confidence: 92%
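As a hedged illustration of the last point, treating the shot count as a hyper-parameter that is adjusted during the calculation, the sketch below adds a simple shot schedule to the toy optimizer above; the linear schedule and its endpoints are illustrative assumptions, not a prescription from refs. [47,48].

```python
# Hedged sketch: grow the per-estimate shot budget over the run, so
# early steps are cheap and noisy while late steps are more precise.
def shot_schedule(step, n_min=5, n_max=200, total_steps=200):
    # Linearly interpolate the shot count across the optimization.
    frac = step / max(total_steps - 1, 1)
    return int(n_min + frac * (n_max - n_min))

# Usage inside the SGD loop from the earlier sketch:
#   shots = shot_schedule(step)
#   theta -= lr * estimate_gradient(theta, shots)
```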
“…As a result, we get ∂_x C(x) = C(x + π/4) − C(x − π/4), which is the so-called parameter shift rule, described in Fig. 1, often used for training quantum circuits [29,38,43]. Note that, with the formalism of the previous section, Ĥ = 0 corresponds to the use of the simpler parametric unitaries of Eq.…”
Section: Stochastic Parameter Shift Rule (mentioning)
confidence: 95%
“…Figure 1: Parameter Shift Rule [29,38,43], only applicable to parametric gates as in Eq. (6) or, more generally, to parametrizations e^{ixV̂} where V̂ has two distinct eigenvalues ±u.…”
Section: Stochastic Parameter Shift Rule (mentioning)
confidence: 99%
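Under the assumption u = 1 (eigenvalues ±1, shift π/4), the rule quoted above can be checked numerically in a few lines. The sketch below, with an illustratively chosen generator V̂ = X and observable Z, confirms that C(x + π/4) − C(x − π/4) equals the exact derivative of C(x) = cos(2x).

```python
# Numerical check of the quoted parameter shift rule, assuming u = 1
# (shift pi/4). Take V = X (eigenvalues +/-1), U(x) = exp(-i x V), and
# cost C(x) = <0| U(x)^dag Z U(x) |0> = cos(2x).
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)   # generator V, eigenvalues +/-1
Z = np.array([[1, 0], [0, -1]], dtype=complex)  # measured observable
ket0 = np.array([1, 0], dtype=complex)

def cost(x):
    psi = expm(-1j * x * X) @ ket0
    return (psi.conj() @ Z @ psi).real

x = 0.37
shift_rule = cost(x + np.pi / 4) - cost(x - np.pi / 4)
exact = -2 * np.sin(2 * x)   # derivative of cos(2x)
print(shift_rule, exact)     # agree to machine precision
```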
“…The proposed model uses a squared hinge loss without regularization and with a constant learning rate, compared to cross-entropy for binary classification. 32 SGD has been used previously in energy-modeling investigations such as energy-entropy competition, 33 wind energy forecasting, 34 and quantum energy physics. 35 …”
Section: Methods (mentioning)
confidence: 99%