2017
DOI: 10.1080/01630563.2016.1254243

A First-Order Stochastic Primal-Dual Algorithm with Correction Step

Abstract: We investigate the convergence properties of a stochastic primal-dual splitting algorithm for solving structured monotone inclusions involving the sum of a cocoercive operator and a composite monotone operator. The proposed method is the stochastic extension to monotone inclusions of a proximal method studied in [26,35] for saddle point problems. It consists of a forward step determined by the stochastic evaluation of the cocoercive operator, a backward step in the dual variables involving the resolvent of the…
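The truncated abstract describes a forward step (a stochastic evaluation of the cocoercive operator) followed by a backward step in the dual variables via a resolvent. As a minimal sketch only — the paper's actual scheme includes a correction step not reproduced here, and the problem, operator names, and step sizes below are all assumptions — a generic stochastic primal-dual iteration of this family, applied to the toy saddle point min_x max_{|v|≤λ} ⟨x, v⟩ + ½‖x − b‖², might look like:

```python
import numpy as np

def stochastic_primal_dual(b, lam, n_iter=2000, tau=0.4, sigma=0.4, seed=0):
    """Generic stochastic primal-dual sketch (no correction step).

    Solves min_x 0.5*||x - b||^2 + lam*||x||_1 through its saddle-point
    form, with a noisy (stochastic) gradient in the forward step.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros_like(b)
    v = np.zeros_like(b)
    for k in range(1, n_iter + 1):
        # Forward step: stochastic evaluation of the cocoercive operator
        # grad(0.5*||x - b||^2) = x - b, perturbed by decaying noise.
        grad = (x - b) + rng.normal(scale=1.0 / k, size=x.shape)
        x_new = x - tau * (v + grad)
        # Backward step in the dual variables: the resolvent here is the
        # projection onto [-lam, lam] (conjugate of lam*||.||_1), applied
        # at the extrapolated primal point 2*x_new - x.
        v = np.clip(v + sigma * (2 * x_new - x), -lam, lam)
        x = x_new
    return x

b = np.array([3.0, -0.5, 1.2])
lam = 1.0
x = stochastic_primal_dual(b, lam)
# The exact minimizer is the soft-thresholding of b at level lam.
expected = np.sign(b) * np.maximum(np.abs(b) - lam, 0.0)
```

The step sizes above follow the usual heuristic τσ‖L‖² < 1 (here L is the identity); the paper's stochastic setting and its correction step impose their own conditions, which this sketch does not capture.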

Cited by 13 publications (11 citation statements) · References 37 publications
“…Remark 3.7 Recently, various publications have investigated convex-concave saddle point problems in the stochastic setting; see [3,12,14,21,33,34,24,30,27,36,41,42] for instance, and the references therein. These existing methods are different from our proposed algorithm.…”
Section: Results · mentioning
confidence: 99%
“…In other words, they are full-splitting. The stochastic counterparts of some primal-dual splitting methods have also been investigated in the literature; see [33,21,14,34] for instance.…”
Section: Introduction · mentioning
confidence: 99%
“…In the general case, to ensure weak almost sure convergence, we need not only step sizes bounded away from 0 but also the summability condition in (3.17). These conditions were used in [7,9,29,30].…”
Section: Remark 3.9 · mentioning
confidence: 99%
“…For the finite-sum case (2), there exist algorithms of similar spirit, such as those in [14,24]. Some algorithms do in fact deal with a similar setup of stochastic gradient-like evaluations, see [26], but only for smooth terms in the objective function.…”
Section: Introduction · mentioning
confidence: 99%