2016 Winter Simulation Conference (WSC)
DOI: 10.1109/wsc.2016.7822133
eg-VSSA: An extragradient variable sample-size stochastic approximation scheme: Error analysis and complexity trade-offs

Cited by 7 publications (6 citation statements); citing works were published between 2018 and 2022. References 12 publications.
“…Then for $\eta < (4(n+1)^2/\tau^2)^{1/3}$, we have $d < 1$. Furthermore, by recalling that $N_k = N_0 q^{-k}$, it follows that $d_k \le \frac{5\eta\nu_1^2 q^k}{16 N_0}$, and we obtain the following bound from (16).…”
Section: Nonsmooth Strongly Convex Optimization
confidence: 76%
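A minimal numerical sketch (in Python, not taken from the cited paper) of the geometric sample-size schedule quoted above: with $N_k = N_0 q^{-k}$ and $0 < q < 1$, the per-iteration sample size grows geometrically while the error term $d_k \le 5\eta\nu_1^2 q^k/(16 N_0)$ contracts. The constants eta, nu1, q, and N0 below are hypothetical placeholders.

import math

# Hypothetical constants: eta (steplength), nu1 (a variance bound),
# q in (0, 1) (geometric factor), N0 (initial sample size).
eta, nu1, q, N0 = 0.1, 1.0, 0.5, 4

for k in range(6):
    N_k = math.ceil(N0 * q ** (-k))                # sample size N_k = N_0 * q^{-k} grows geometrically
    d_k = 5 * eta * nu1 ** 2 * q ** k / (16 * N0)  # bound d_k <= 5*eta*nu1^2*q^k / (16*N0) shrinks geometrically
    print(f"k={k}: N_k={N_k}, d_k <= {d_k:.4e}")

Summing the schedule over k = 0, ..., K gives a total sampling cost of O(q^{-K}), the usual price paid for geometric decay of the error term.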
“…By utilizing variable sample-size (VS) stochastic gradient schemes, linear convergence rates were obtained for strongly convex problems [36, 18], and these rates were subsequently improved (in a constant-factor sense) through a VS-accelerated proximal method developed by Jalilzadeh et al. [17] (called VS-APM). In convex regimes, Ghadimi and Lan [12] developed an accelerated framework that admits the optimal rate of $O(1/k^2)$ and the optimal oracle complexity (also see [18]), improving the rate statement presented in [16]. More recently, in [17], Jalilzadeh et al. present a smoothed accelerated scheme that admits the optimal rate of $O(1/k)$ and optimal oracle complexity for nonsmooth problems, recovering the findings in [12] in the smooth regime.…”
Section: Convexity
confidence: 99%
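A back-of-envelope check of how the two optimality claims above fit together (an illustration under an assumed polynomial sample-size growth, not a statement from [12] or [18]): a rate of $O(1/K^2)$ means $K = O(\epsilon^{-1/2})$ iterations reach accuracy $\epsilon$; if, for instance, $N_k = O(k^3)$, the total oracle cost is $\sum_{k=1}^{K} N_k = O(K^4) = O(\epsilon^{-2})$, consistent with the stated optimal oracle complexity.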
“…Randomized smoothing techniques have also been employed by [38] together with recursive steplengths (see [25] for a review). (b) Variance reduction. In strongly convex regimes (without acceleration), a linear rate of convergence in expected error was first shown for variance-reduced gradient methods by [33] and revisited by [15], while similar rates were provided for extragradient methods by [14]; the accelerated counterpart (VS-APM) improves the complexity to $O(\sqrt{L/\mu}\,\log(1/\epsilon))$. In smooth regimes, an accelerated scheme was first presented by [12], where every iteration requires two prox evaluations, admitting the optimal rate and oracle complexity of $O(1/k^2)$ and $O(1/\epsilon^2)$, respectively.…”
Section: Prior Research
confidence: 83%
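As a rough consistency check (a sketch under the standard accelerated contraction assumption, not taken from the cited works): if the expected error contracts by a factor $1 - \sqrt{\mu/L}$ per iteration, then $(1 - \sqrt{\mu/L})^K \le \epsilon$ once $K \ge \sqrt{L/\mu}\,\log(1/\epsilon)$, which is the iteration count behind the stated $O(\sqrt{L/\mu}\,\log(1/\epsilon))$ complexity.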
“…Over the last 15 years, there has been a surge of advances in stochastic first-order schemes, including extragradient methods (Juditsky et al. 2011, Yousefian et al. 2014, Jalilzadeh and Shanbhag 2016), accelerated schemes (Lan 2016, Jalilzadeh et al. 2018b), variance-reduced schemes (Shanbhag and Blanchet 2015, Jofré and Thompson 2019), deterministic step-length schemes for nonconvex programs, and SQN schemes (Lucchi et al. 2015, Zhou et al. 2017, Jalilzadeh et al. 2018a). We review some recent advances in stochastic trust-region and line-search methods.…”
Section: Stochastic Iterative Schemes For Stochastic Nonconvex Optimization
confidence: 99%