2013
DOI: 10.1109/tac.2012.2215413
Regularized Iterative Stochastic Approximation Methods for Stochastic Variational Inequality Problems

Cited by 149 publications (188 citation statements)
Citation types: 3 supporting, 185 mentioning, 0 contrasting
References 34 publications
“…The reader is referred to the recent publications [13] and [17] for a discussion of the challenges associated with this problem and its applications (our use of the "≤" relation instead of the common "≥" is motivated only by the ease of converting the problem to our formulation). We propose to convert problem (1.3) to the nested form (1.1) by defining the lifted gap function f : ℝ^n × ℝ^n → ℝ as in (1.4) and the function g : ℝ^n → ℝ^n × ℝ^n as g(x) = (x, H(x)).…”
Section: Example 1 (Stochastic Variational Inequality); citation type: mentioning
confidence: 99%
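For orientation, the sketch below records one standard (Fukushima-type) regularized gap-function lifting that produces such a nested form. The exact definition labelled (1.4) in the citing paper is not reproduced in the excerpt, so this particular choice of f is an illustrative assumption rather than a transcription.

```latex
% VI(X, H): find x* in X with <H(x*), y - x*> >= 0 for all y in X
% (the excerpt above adopts the opposite sign convention).
% One common regularized gap function lifts the map value into a second argument u:
\[
  f(x, u) \;=\; \max_{y \in X} \Big\{ \langle u,\, x - y \rangle \;-\; \tfrac{1}{2}\,\lVert x - y \rVert^{2} \Big\},
  \qquad
  g(x) \;=\; \bigl(x,\, H(x)\bigr).
\]
% Then f(g(x)) >= 0 on X and f(g(x)) = 0 exactly at solutions of the VI,
% so the VI can be treated as the nested problem  min_{x in X} f(g(x)).
```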
“…Note that F_k refers to the σ-field associated with k and ω_k. Unfortunately, the almost-sure convergence statements of standard stochastic approximation schemes are limited to regimes where the map is strongly monotone, while regularized variants have been shown capable of handling merely monotone or strictly monotone maps [11], [12]. Motivated by this shortcoming, we propose what we believe is the first stochastic extragradient scheme.…”
Section: A Stochastic Extragradient Framework; citation type: mentioning
confidence: 99%
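To make the extragradient template concrete, here is a minimal sketch of a projection-based stochastic extragradient iteration for VI(X, F) with F(x) = E[Φ(x, ω)]. The box constraint set, the sampling oracle, and the 1/√k steplength are assumptions chosen for illustration, not details of the cited scheme.

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto a box X = [lo, hi]^n (a stand-in for a general Pi_X)."""
    return np.clip(x, lo, hi)

def stochastic_extragradient(sample_map, x0, steps=10_000, lo=-1.0, hi=1.0):
    """Stochastic extragradient for VI(X, F) with F(x) = E[sample_map(x)].

    Each iteration uses two independent noisy evaluations of the map: one for
    the extrapolation (leading) step and one for the actual update.
    """
    x = np.asarray(x0, dtype=float)
    for k in range(1, steps + 1):
        gamma = 1.0 / np.sqrt(k)                              # diminishing steplength (assumption)
        y = project_box(x - gamma * sample_map(x), lo, hi)    # extrapolation step
        x = project_box(x - gamma * sample_map(y), lo, hi)    # update step
    return x

# Example: a merely monotone (skew-symmetric) linear map observed through noise,
# a regime where plain stochastic approximation is not guaranteed to converge.
rng = np.random.default_rng(0)
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
noisy_map = lambda x: A @ x + 0.1 * rng.standard_normal(2)
print(stochastic_extragradient(noisy_map, x0=np.ones(2)))
```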
“…Of these, the first approximates expectations by sample averages and examines the consistency of the resulting estimators (sample average approximation (SAA) techniques) [8], [9]. The second approach, inspired by the seminal work of Robbins and Monro [10], is that of stochastic approximation, and there has been a surge of recent effort applying such techniques to stochastic variational inequality problems [11], [12], [13], [14]. While most of the above contributions rely on standard diminishing steplength sequences, Yousefian and his coauthors [15], [16] propose techniques in which the steplength sequences are tuned to problem parameters and minimize a suitably defined mean-squared error bound.…”
Section: Introduction; citation type: mentioning
confidence: 99%
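The projection-based stochastic approximation template referenced here, in its regularized form, can be sketched as follows. The particular steplength and regularization exponents are illustrative assumptions; the cited works impose their own, more careful conditions on these sequences.

```python
import numpy as np

def regularized_sa(sample_map, project, x0, steps=10_000):
    """Regularized (Tikhonov-type) stochastic approximation for VI(X, F):
        x_{k+1} = Pi_X( x_k - gamma_k * ( Phi(x_k, w_k) + eps_k * x_k ) ),
    with both gamma_k and eps_k diminishing, and eps_k decaying more slowly
    than gamma_k (the exponents below are illustrative assumptions)."""
    x = np.asarray(x0, dtype=float)
    for k in range(1, steps + 1):
        gamma_k = 1.0 / k ** 0.75   # steplength sequence (assumption)
        eps_k = 1.0 / k ** 0.25     # regularization sequence (assumption)
        x = project(x - gamma_k * (sample_map(x) + eps_k * x))
    return x

# Usage: X = nonnegative orthant, F(x) = A x with a skew-symmetric (merely monotone) A,
# observed through additive noise.
rng = np.random.default_rng(1)
A = np.array([[0.0, 2.0], [-2.0, 0.0]])
noisy_map = lambda x: A @ x + 0.1 * rng.standard_normal(2)
print(regularized_sa(noisy_map, project=lambda z: np.maximum(z, 0.0), x0=np.ones(2)))
```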
“…(ii) Stochastic Nash games. Regularized stochastic approximation schemes were presented for monotone stochastic Nash games [8], while extensions have been developed to contend with misspecification [9] and the lack of Lipschitzian properties [10]. More recently, sampled best-response schemes have been developed in [11], while rate statements and iteration complexity bounds have been provided for a class of inexact stochastic best-response schemes in [12]-[14].…”
Section: Introduction; citation type: mentioning
confidence: 99%