2020
DOI: 10.48550/arxiv.2010.13335
Preprint

Convergence Acceleration via Chebyshev Step: Plausible Interpretation of Deep-Unfolded Gradient Descent

Abstract: Deep unfolding is a promising deep-learning technique, whose network architecture is based on expanding the recursive structure of existing iterative algorithms. Although convergence acceleration is a remarkable advantage of deep unfolding, its theoretical aspects have not been revealed yet. The first half of this study details the theoretical analysis of the convergence acceleration in deep-unfolded gradient descent (DUGD) whose trainable parameters are step sizes. We propose a plausible interpretation of the…
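As a concrete illustration of the architecture described in the abstract, the following is a minimal sketch (not the authors' code) of deep-unfolded gradient descent for a least-squares objective 0.5·||y − Hx||², in which the only trainable parameters are the per-iteration step sizes. The matrix H, the dimension n, the number of unfolded iterations T, and the supervised training setup are illustrative assumptions.

```python
# Minimal DUGD sketch (illustrative assumptions, not the authors' code):
# unfold T gradient-descent iterations for f(x) = 0.5 * ||y - H x||^2 and
# train only the per-iteration step sizes gamma[0..T-1].
import torch

torch.manual_seed(0)
n, T = 8, 10                                      # problem size, unfolded depth (assumed)
H = torch.randn(n, n) / n**0.5                    # hypothetical fixed measurement matrix
gamma = torch.nn.Parameter(0.1 * torch.ones(T))   # trainable step sizes, one per layer

def dugd(y, gamma):
    """Run T unfolded gradient-descent layers with per-layer step sizes."""
    x = torch.zeros_like(y)
    for t in range(T):
        grad = H.T @ (H @ x - y)      # gradient of 0.5 * ||y - H x||^2
        x = x - gamma[t] * grad       # one unfolded GD layer
    return x

opt = torch.optim.Adam([gamma], lr=1e-2)
for step in range(200):                            # toy end-to-end training loop
    x_true = torch.randn(n)
    y = H @ x_true
    loss = torch.mean((dugd(y, gamma) - x_true) ** 2)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Training the T step sizes end-to-end, rather than reusing one fixed step size in every iteration, is the setting whose learned values the paper interprets via Chebyshev steps.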

Cited by 1 publication (9 citation statements)
References 36 publications
“…In this subsection, we will briefly review some basic facts regarding Chebyshev PSOR according to [9].…”
Section: Brief Review of Chebyshev PSOR (mentioning)
confidence: 99%
“…as an example. The method in [9] handles more general fixed-point iterations, such as x^(k+1) := f(x^(k)), but we here restrict our attention to the simplest case required for the following discussion. If the spectral radius of A satisfies ρ(A) < 1, the linear fixed-point iteration converges to the fixed point x* = 0.…”
Section: Brief Review of Chebyshev PSOR (mentioning)
confidence: 99%
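To make the quoted setting concrete, below is a minimal numerical sketch (an assumption-laden illustration, not code from [9]) of Chebyshev acceleration of the linear fixed-point iteration x^(k+1) := A x^(k): the plain update is replaced by the SOR-like update x^(k+1) := x^(k) + ω_k (A x^(k) − x^(k)), with the factors ω_k taken as reciprocals of the Chebyshev roots mapped onto the eigenvalue range of I − A. The symmetric test matrix, the dimension n, and the period T are illustrative choices.

```python
# Minimal sketch of Chebyshev-accelerated fixed-point iteration (illustrative
# assumptions, not code or parameters from [9]).
import numpy as np

rng = np.random.default_rng(0)
n, T = 50, 8                                  # dimension and Chebyshev period (assumed)

# Symmetric A with spectral radius 0.9 < 1, so x^(k+1) = A x^(k) converges to x* = 0.
M = rng.standard_normal((n, n))
S = M + M.T
A = 0.9 * S / np.abs(np.linalg.eigvalsh(S)).max()

# Eigenvalue range of B = I - A, which governs the error contraction.
lam = np.linalg.eigvalsh(np.eye(n) - A)
mu_min, mu_max = lam.min(), lam.max()

# Chebyshev steps: reciprocals of the Chebyshev roots mapped onto [mu_min, mu_max].
k = np.arange(T)
omega = 1.0 / (0.5 * (mu_max + mu_min)
               + 0.5 * (mu_max - mu_min) * np.cos((2 * k + 1) * np.pi / (2 * T)))

x_plain = x_cheb = rng.standard_normal(n)
for i in range(T):
    x_plain = A @ x_plain                               # plain fixed-point step
    x_cheb = x_cheb + omega[i] * (A @ x_cheb - x_cheb)  # SOR-like Chebyshev step

# The accelerated residual is typically much smaller after one period of T steps.
print(np.linalg.norm(x_plain), np.linalg.norm(x_cheb))
```

Over one period of T steps, the product of the error-contraction matrices (I − ω_k (I − A)) equals a scaled Chebyshev polynomial in I − A, which is why the accelerated iterate approaches the fixed point x* = 0 much faster than the plain iteration in this sketch.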