2021
DOI: 10.1007/s11128-021-03225-7
Learning bounds for quantum circuits in the agnostic setting

Cited by 16 publications (6 citation statements)
References 37 publications
“…Next, we give a comparison to previously known results. Some prior works have studied the generalization capabilities of quantum models, among them the classical learning-theoretic approaches of [81–89]; the more geometric perspective of [17, 18]; and the information-theoretic technique of [20, 37]. Independently of this work, Ref.…”
Section: Discussion
confidence: 99%
“…Thus, when training using training data $D_Q(N)$, the out-of-distribution risk $R_P(\alpha_{\mathrm{opt}})$ of the optimized parameters $\alpha_{\mathrm{opt}}$ after training is controlled in terms of the optimized training cost $C_{D_Q(N)}(\alpha_{\mathrm{opt}})$ and the in-distribution generalization error $\mathrm{gen}_{Q,D_Q(N)}(\alpha_{\mathrm{opt}})$. We can now bound the in-distribution generalization error using already known QML in-distribution generalization bounds [11–23] (or, indeed, any such bounds that are derived in the future). As a concrete example of guarantees that can be obtained this way, we combine Corollary 1 with an in-distribution generalization bound established in [20] to prove:…”
Section: B. Out-of-Distribution Generalization for QNNs
confidence: 99%
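Read schematically, the quoted risk decomposition amounts to the bound below. This is a sketch inferred from the excerpt alone: the symbols $R_P$, $C_{D_Q(N)}$, and $\mathrm{gen}_{Q,D_Q(N)}$ are used exactly as in the quote, and $\lesssim$ hedges any constants or additional terms that the cited Corollary 1 may carry, since the excerpt does not state them.

% Schematic reading of the excerpt: out-of-distribution risk is
% controlled by the optimized training cost plus the in-distribution
% generalization error (constants/extra terms from Corollary 1 elided).
\[
  R_P(\alpha_{\mathrm{opt}})
  \;\lesssim\;
  C_{D_Q(N)}(\alpha_{\mathrm{opt}})
  \;+\;
  \mathrm{gen}_{Q,\,D_Q(N)}(\alpha_{\mathrm{opt}})
\]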
“…The key challenge in quantum machine learning is to design models that can learn from data and apply their acquired knowledge to perform well on new data [1]. This latter ability is called generalization and has been intensely studied recently [2–17]. Constructing models that generalize well is essential for quantum machine learning tasks such as variational learning of unitaries [18–24], which is applied to unitary compiling [11, 25, 26], quantum simulation [10, 27, 28], quantum autoencoders [29, 30] and black-hole recovery protocols [31].…”
confidence: 99%