2011
DOI: 10.1007/s10107-011-0472-0

Incremental proximal methods for large scale convex optimization

Abstract: We consider the minimization of a sum ∑_{i=1}^{m} f_i(x) consisting of a large number of convex component functions f_i. For this problem, incremental methods consisting of gradient or subgradient iterations applied to single components have proved very effective. We propose new incremental methods, consisting of proximal iterations applied to single components, as well as combinations of gradient, subgradient, and proximal iterations. We provide a convergence and rate of convergence analysis of a variety of such …
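
As a rough illustration of the incremental proximal iteration described in the abstract (a minimal sketch, not the paper's exact algorithm; the quadratic components and their closed-form proximal maps below are assumptions chosen to keep the example self-contained):

```python
import numpy as np

def incremental_proximal(proxes, x0, stepsize=0.1, epochs=100):
    """Cycle through the components, applying one proximal step per component.

    `proxes` is a list of callables; proxes[i](x, alpha) returns
    argmin_z { f_i(z) + (1 / (2 * alpha)) * ||z - x||^2 }.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(epochs):
        for prox_i in proxes:
            x = prox_i(x, stepsize)
    return x

def make_quadratic_prox(b_i):
    """Proximal map of f_i(x) = 0.5 * ||x - b_i||^2, available in closed form."""
    b_i = np.asarray(b_i, dtype=float)
    def prox(x, alpha):
        # argmin_z 0.5*||z - b_i||^2 + (1/(2*alpha))*||z - x||^2
        return (x + alpha * b_i) / (1.0 + alpha)
    return prox

if __name__ == "__main__":
    targets = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([2.0, 2.0])]
    x_hat = incremental_proximal([make_quadratic_prox(b) for b in targets],
                                 x0=np.zeros(2))
    print(x_hat)  # with a small constant stepsize, stays near the mean of the targets
```

With a constant stepsize the iterates settle into a neighborhood of the minimizer of ∑_i f_i; a diminishing stepsize is the usual way to obtain exact convergence.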

Cited by 282 publications (326 citation statements)
References 53 publications
“…Using the same principle, a distributed algorithm involving an additional consensus step has been proposed in [27]. Random iterations involving proximal and subgradient operators were considered in [28] and in [29]. In [29], the functions g(ξ, ·)…”
Section: Related Work (mentioning)
confidence: 99%
“…Another closely related work is Bertsekas [Ber11]. It proposed an algorithmic framework that alternates incrementally between subgradient and proximal iterations for minimizing a cost function f = ∑_{i=1}^{m} f_i, the sum of a large but finite number of convex components f_i, over a constraint set X.…”
Section: Sampling Schemes for Subgradients g(·, v_k) or Component Functions (mentioning)
confidence: 99%
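
A hedged sketch of the kind of combined iteration this excerpt refers to (the splitting f_i = g_i + h_i, the box constraint standing in for X, and the function names are illustrative assumptions, not the cited framework verbatim):

```python
import numpy as np

def project_box(x, lo=-1.0, hi=1.0):
    """Euclidean projection onto the box X = [lo, hi]^n, standing in for P_X."""
    return np.clip(x, lo, hi)

def incremental_subgrad_prox(subgrads, proxes, x0, steps=1000):
    """Alternate a subgradient step on g_i with a proximal step on h_i,
    visiting the components i cyclically and projecting back onto X.

    `subgrads[i](x)` returns a subgradient of g_i at x;
    `proxes[i](z, alpha)` returns the proximal map of h_i at z with parameter alpha.
    """
    x = np.asarray(x0, dtype=float)
    m = len(subgrads)
    for k in range(steps):
        i = k % m
        alpha = 1.0 / (k + 1)                  # diminishing stepsize
        z = x - alpha * subgrads[i](x)         # subgradient step on g_i
        x = project_box(proxes[i](z, alpha))   # proximal step on h_i, then P_X
    return x
```

Cyclic order is only one choice; the section this excerpt comes from concerns randomized sampling of the component index, which leaves the per-step cost unchanged.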
“…This line of analysis is shared with incremental subgradient and proximal methods (see [NB00], [NB01], [Ber11]). However, here the technical details are more intricate because there are two types of iterations, which involve the two different stepsizes α_k and β_k.…”
Section: Assumptions and Preliminaries (mentioning)
confidence: 99%
“…We emphasize that these assumptions hold for a variety of cost functions including regularized squared error loss, hinge loss, and logistic loss [6], and similar assumptions are widely used to analyze the convergence properties of incremental gradient methods in the literature [2,4,7,12,23]. Note that in contrast with many of these analyses, we do not assume that the component functions f_i are convex.…”
Section: Assumption 3.2 (Strong Convexity) (mentioning)
confidence: 99%
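
For concreteness, a hypothetical component of the kind listed in this excerpt: an ℓ2-regularized squared error loss, which is strongly convex with modulus equal to the regularization weight (the names and default value below are illustrative):

```python
import numpy as np

def component_loss(x, a_i, b_i, lam=0.1):
    """f_i(x) = 0.5 * (a_i . x - b_i)^2 + (lam / 2) * ||x||^2.

    The data-fit term is convex; the regularizer makes f_i strongly
    convex with modulus lam.
    """
    r = a_i @ x - b_i
    return 0.5 * r * r + 0.5 * lam * np.dot(x, x)

def component_grad(x, a_i, b_i, lam=0.1):
    """Gradient of f_i, as used in an incremental gradient step x <- x - alpha * grad."""
    return (a_i @ x - b_i) * a_i + lam * x
```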