2009
DOI: 10.1016/j.automatica.2009.09.031

Finite dimensional approximation and Newton-based algorithm for stochastic approximation in Hilbert space

Cited by 3 publications (5 citation statements)
References 33 publications

“…This is the first work to look at the classical gradient and accelerated gradient descent, which means we are able to demonstrate convergence using a new range of step sizes. Turning our attention to the previous body of uncertainty quantification for SA algorithms, in [8] a similar spectral approach is taken, but truncated to a finite dimension at all iterations. They truncate by letting x(θ) = Σ_i u_i B_i, for some finite family of functions B_i, and then perform a standard Stochastic Approximation procedure to calculate the coefficients u_i.…”
Section: Previous Work. Much of the previous work surrounding chaos ex… (mentioning)
confidence: 99%
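To make the truncation in the excerpt above concrete, here is a minimal sketch, assuming a cosine basis on [0, 1], a squared-error objective, and 1/n step sizes; all of these choices are illustrative and are not taken from [8] or from the paper under review.

import numpy as np

# Hypothetical cosine basis B_0, ..., B_{m-1} on [0, 1]; the basis, objective, and
# step sizes are illustrative assumptions, not the construction of the cited papers.
def basis(theta, m):
    """Return the vector (B_0(theta), ..., B_{m-1}(theta))."""
    return np.array([np.cos(np.pi * i * theta) for i in range(m)])

def truncated_sa(noisy_grad, m=10, n_iter=5000, seed=0):
    """Stochastic approximation on the coefficients u of x(theta) = sum_i u_i * B_i(theta).

    noisy_grad(x_val, theta) must return an unbiased estimate of the derivative of
    the objective with respect to the value x(theta).
    """
    rng = np.random.default_rng(seed)
    u = np.zeros(m)
    for n in range(1, n_iter + 1):
        theta = rng.uniform(0.0, 1.0)      # random evaluation point
        B = basis(theta, m)
        g = noisy_grad(u @ B, theta)       # noisy scalar gradient at x(theta)
        u -= (1.0 / n) * g * B             # chain rule: d x(theta) / d u_i = B_i(theta)
    return u

# Example: fit f(theta) = theta^2 from noisy targets, i.e. minimize E[(x(theta) - f(theta))^2 / 2].
u_hat = truncated_sa(lambda x_val, th: x_val - (th**2 + np.random.normal(scale=0.1)))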
“…In [17], Kulkarni and Borkar provide a finite dimensional procedure to approximate the minimum of ζ ↦ ∫ L(ζ(v), v) µ(dv) on some subset of the real valued functions, for some explicit non-negative function L satisfying some strict convexity properties with respect to (w.r.t.) its first argument.…”
Section: Define (13) (mentioning)
confidence: 99%
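Read together with the previous excerpt, the truncated problem can be written as the finite-dimensional minimization below; the truncation level m and the name J_m are shorthand introduced here, not notation from [17]:

\min_{u \in \mathbb{R}^m} J_m(u), \qquad J_m(u) = \int L\Big(\sum_{i=1}^{m} u_i B_i(v),\, v\Big)\, \mu(dv).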
“…They analyze the error due to finite dimensional truncation. However, this analysis makes apparent that, in order to achieve actual convergence results (as opposed to error bounds in [17]), one really needs to let m increase during the algorithm. Moreover, even when the function ζ is explicitly known (which is not the case in our setup) and estimating individual coefficients u_i in (1.4) is straightforward by MC simulations, the global convergence of a method where more and more coefficients are computed by Monte Carlo is nontrivial, subject to a fine-tuning of the speeds at which the number of coefficients and the number of simulations go to infinity (see [13]).…”
Section: Define (13) (mentioning)
confidence: 99%
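As an illustration of the coefficient-by-Monte-Carlo idea, and of why the truncation level and the number of simulations must grow together, here is a minimal sketch under assumptions of our own (ζ explicitly known, an orthonormal cosine basis, uniform µ, and arbitrary growth schedules); it is not the scheme of [13] or [17].

import numpy as np

# Minimal sketch of the "more coefficients, more simulations" schedule discussed above.
# Assumptions (ours, not the cited papers'): zeta is explicitly known, the basis
# functions are orthonormal for mu, mu is the uniform law on [0, 1], and the
# schedules m_k and N_k are purely illustrative.
def mc_coefficients(zeta, basis, m, n_samples, rng):
    """Monte Carlo estimates of u_i = E_mu[zeta(V) * B_i(V)] for i = 1, ..., m."""
    v = rng.uniform(0.0, 1.0, size=n_samples)
    return np.array([np.mean(zeta(v) * basis(i, v)) for i in range(1, m + 1)])

def staged_approximation(zeta, basis, n_stages=5, seed=0):
    rng = np.random.default_rng(seed)
    estimates = []
    for k in range(1, n_stages + 1):
        m_k = 2 * k            # truncation level grows with the stage
        N_k = 1000 * k ** 2    # sample size grows faster so the MC error still vanishes
        estimates.append(mc_coefficients(zeta, basis, m_k, N_k, rng))
    return estimates

# Example: a cosine basis that is orthonormal for the uniform law on [0, 1].
cos_basis = lambda i, v: np.sqrt(2.0) * np.cos(np.pi * i * v)
coeffs = staged_approximation(lambda v: v ** 2, cos_basis)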
“…In the operations research and control community, there are two applications of random projections we are aware of. First, [1], where a low-rank approximation is used to approximately find the zero of a linear equation, and second, [18], where a similar approximation is used within a Newton method-based stochastic approximation. Our work differs from [1] in two fundamental ways.…”
Section: Introduction (mentioning)
confidence: 99%
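The excerpt refers to combining random projections with a Newton-type stochastic approximation. As an illustration only, here is a generic sketch of such a step, with the projection size, damping, and step sizes chosen arbitrarily; it is not the specific scheme of [1], [18], or the paper under review.

import numpy as np

# Generic sketch: a Newton-type stochastic approximation step carried out in a
# randomly projected low-dimensional subspace. Projection size k, damping, and
# step sizes are illustrative assumptions.
def projected_newton_sa(noisy_grad, noisy_hess, d, k=5, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(d)
    for n in range(1, n_iter + 1):
        S = rng.normal(size=(d, k)) / np.sqrt(k)   # random projection (sketch) matrix
        g = noisy_grad(x)                          # noisy gradient estimate in R^d
        H = noisy_hess(x)                          # noisy Hessian estimate, d x d
        H_k = S.T @ H @ S + 1e-3 * np.eye(k)       # projected, damped Hessian, k x k
        step = S @ np.linalg.solve(H_k, S.T @ g)   # Newton step lifted back to R^d
        x -= (1.0 / n) * step                      # SA step size a_n = 1/n
    return x

# Example: a noisy quadratic whose minimizer is the all-ones vector.
d = 50
A = np.diag(np.linspace(1.0, 10.0, d))
b = A @ np.ones(d)
x_hat = projected_newton_sa(
    lambda x: A @ x - b + 0.01 * np.random.randn(d),
    lambda x: A + 0.01 * np.random.randn(d, d),
    d,
)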