2022
DOI: 10.48550/arXiv.2210.02764
Preprint

Generalization to the Natural Gradient Descent

Abstract: The optimization problem, which aims to find the global minimum of a given cost function, is one of the central problems in science and engineering. Various numerical methods have been proposed to solve it, among which the Gradient Descent (GD) method is the most popular due to its simplicity and efficiency. However, the GD method suffers from two main issues: local minima and slow convergence, especially near the minimum. The Natural Gradient Descent (NGD), which has been prov…
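The abstract contrasts plain GD with NGD but is truncated before the paper's generalized method is described. As a hedged illustration only, and not the paper's algorithm, the sketch below fits a univariate Gaussian by maximum likelihood and preconditions the gradient with the exact Fisher information matrix, i.e. the standard NGD update θ ← θ − η F⁻¹∇L; the toy model, learning rate, and all names are assumptions made for this example.

```python
# Minimal sketch (not the paper's generalized NGD): plain gradient descent vs.
# natural gradient descent for fitting a 1-D Gaussian by maximum likelihood.
# For N(mu, sigma) the Fisher information is diag(1/sigma^2, 2/sigma^2),
# so the natural gradient can be formed in closed form here.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=0.5, size=1000)

def neg_log_lik_grad(mu, sigma):
    # Gradient of the average negative log-likelihood of N(mu, sigma).
    d = data - mu
    g_mu = -np.mean(d) / sigma**2
    g_sigma = 1.0 / sigma - np.mean(d**2) / sigma**3
    return np.array([g_mu, g_sigma])

def fisher(sigma):
    # Exact Fisher information matrix for a univariate Gaussian.
    return np.diag([1.0 / sigma**2, 2.0 / sigma**2])

theta = np.array([0.0, 2.0])   # initial (mu, sigma)
lr = 0.1
for _ in range(200):
    g = neg_log_lik_grad(*theta)
    # Plain GD would use: theta -= lr * g
    # NGD instead preconditions the gradient with the inverse Fisher metric:
    theta -= lr * np.linalg.solve(fisher(theta[1]), g)

print(theta)   # approaches (2.0, 0.5)
```

Preconditioning with the Fisher metric makes the step size invariant to how the model is parametrized, which is the usual motivation for NGD over plain GD.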

Cited by 1 publication (5 citation statements). References: 30 publications.
“…By contrast, we both theoretically and experimentally realized the NDO tomography of a 2 × n open QW system and the reconstruction of the mixed state with the largest number of matrix elements, up to 62², surpassing all previously reported results. Furthermore, this experimental study is also the first to verify the effectiveness of our self-developed GNGD optimization algorithm (38) for accelerating training on the long-standing challenging task of experimental learning of open QW. The GNGD optimizer can be applied to various neural-network architectures to effectively speed up the gradient-based tomography of the quantum state (65, 66) and the quantum process (67-69).…”
Section: Discussion
confidence: 96%
“…, between the target and the reconstructed distributions. The process starts with the network parameters initialized to random values, and in each optimization iteration i, the parameters are updated according to the GNGD procedure (38):…”
Section: Performance of the Efficient Tomography for QW
confidence: 99%
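The exact GNGD update of ref. (38) is not quoted above, so the following is only a hedged sketch of the loop the citing authors describe: random initialization followed by per-iteration natural-gradient-style updates. A damped empirical Fisher matrix stands in for the unspecified GNGD preconditioner, and grad_fn and per_sample_grads_fn are hypothetical placeholders for the tomography cost and its per-sample gradients.

```python
# Hedged sketch of the described training loop; the true GNGD update rule is
# in ref. (38) and is replaced here by a generic damped natural-gradient step.
import numpy as np

def train(grad_fn, per_sample_grads_fn, n_params,
          lr=0.05, damping=1e-3, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    theta = rng.normal(scale=0.1, size=n_params)    # random initialization
    for i in range(iters):
        g = grad_fn(theta)                          # gradient of the cost (e.g. a KL divergence)
        G = per_sample_grads_fn(theta)              # per-sample gradients, shape (n_samples, n_params)
        fisher = G.T @ G / len(G)                   # empirical Fisher estimate
        # Natural-gradient-style step: solve (F + damping * I) step = g
        step = np.linalg.solve(fisher + damping * np.eye(n_params), g)
        theta -= lr * step
    return theta
```

The damping term keeps the linear solve well conditioned when the empirical Fisher estimate is rank-deficient, which is common early in training.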