2022
DOI: 10.1088/2058-9565/ac4f30
Sample complexity of learning parametric quantum circuits

Abstract: Quantum computers hold unprecedented potential for machine learning applications. Here, we prove that physical quantum circuits are PAC (probably approximately correct) learnable on a quantum computer via empirical risk minimization: to learn a parametric quantum circuit with at most $n^c$ gates and each gate acting on a constant number of qubits, the sample complexity is bounded by $\tilde{O}(n^{c+1})$. In particular, we explicitly construct a family of variational quantum circuits with $O(n^{c+1})$ element…
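To get a feel for the $\tilde{O}(n^{c+1})$ scaling stated in the abstract, the sketch below tabulates an illustrative PAC-style sample-count estimate for a few circuit widths. The constant factor, the logarithmic term, and the $1/\epsilon^2$ dependence are assumptions for illustration only, not the paper's exact bound.

```python
import math

def pac_sample_estimate(n, c, epsilon=0.1, delta=0.05, const=1.0):
    """Illustrative sample-count estimate following the O~(n^(c+1)) scaling.

    The constant, the log factor, and the 1/epsilon^2 dependence are
    assumed for illustration; only the n^(c+1) growth comes from the paper.
    """
    return const * (n ** (c + 1)) * math.log(n / delta) / epsilon ** 2

# Doubling n with c = 2 should grow the estimate by more than n^3 = 8x,
# since the log term also increases.
for n in (4, 8, 16):
    print(n, round(pac_sample_estimate(n, c=2)))
```

The point of the sketch is only the polynomial growth in $n$: doubling the number of qubits multiplies the (illustrative) sample requirement by a bit more than $2^{c+1}$.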

Cited by 20 publications (13 citation statements)
References 59 publications
“…Next, we give a comparison to previously known results. Some prior works have studied the generalization capabilities of quantum models, among them the classical learning-theoretic approaches of [81][82][83][84][85][86][87][88][89]; the more geometric perspective of 17,18; and the information-theoretic technique of 20,37. Independently of this work, Ref.…”
Section: Discussion
confidence: 99%
“…Thus, when training using training data $D_Q(N)$, the out-of-distribution risk $R_P(\alpha_{\mathrm{opt}})$ of the optimized parameters $\alpha_{\mathrm{opt}}$ after training is controlled in terms of the optimized training cost $C_{D_Q(N)}(\alpha_{\mathrm{opt}})$ and the in-distribution generalization error $\mathrm{gen}_{Q,D_Q(N)}(\alpha_{\mathrm{opt}})$. We can now bound the in-distribution generalization error using already known QML in-distribution generalization bounds [11][12][13][14][15][16][17][18][19][20][21][22][23] (or, indeed, any such bounds that are derived in the future). As a concrete example of guarantees that can be obtained this way, we combine Corollary 1 with an in-distribution generalization bound established in [20] to prove:…”
Section: B Out-of-distribution Generalization For QNNs
confidence: 99%
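The decomposition quoted above can be made concrete with a minimal numeric sketch. The values below are assumed for illustration (they come from no cited work), and the plain sum is a schematic simplification of "controlled in terms of":

```python
# Schematic sketch of the risk decomposition quoted above: the
# out-of-distribution risk R_P(alpha_opt) is controlled by the optimized
# training cost C_{D_Q(N)}(alpha_opt) plus the in-distribution
# generalization error gen_{Q, D_Q(N)}(alpha_opt).
# All numbers are assumed for illustration only.
training_cost = 0.08        # C_{D_Q(N)}(alpha_opt), assumed
in_dist_gen_error = 0.05    # gen_{Q, D_Q(N)}(alpha_opt), assumed

# Schematic upper bound on R_P(alpha_opt); the actual bound in the cited
# work may include further distribution-shift terms.
out_of_dist_risk_bound = training_cost + in_dist_gen_error
print(out_of_dist_risk_bound)
```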
“…As an example of the potential applicability of VQML, it has been observed that these models can be used to solve a variety of financial problems, such as fraud detection and creditworthiness determination [24]-[26]. There has been an active line of studies to characterize the expressivity [27]-[29], the generalizability [30]-[38], and the trainability [39]-[41] of VQML models. However, the specific case of quantum models with discrete-valued inputs has not been investigated as extensively [42], [43].…”
Section: Introduction
confidence: 99%
“…In Appendix A-B, we present a concrete example of the unitary operators in Equation (54) that produces a linear quantum model on a single qubit whose output is expressible as a nontrivial linear combination of all Fourier basis elements for $B_3$. This alternative operator, in the case where the $V_k$ are not trainable, could be used in place of $U_{3,1}$ in Equation (38) in the multi-qubit case. However, the degree of freedom that the trainable part of the model, $O_\theta$, has in choosing the coefficients of the $\chi_P$ is limited when compared to the ensemble approach.…”
confidence: 99%