2018 IEEE Conference on Decision and Control (CDC)
DOI: 10.1109/cdc.2018.8619735

On the Location of the Minimizer of the Sum of Two Strongly Convex Functions

Abstract: The problem of finding the minimizer of a sum of convex functions is central to the field of distributed optimization. Thus, it is of interest to understand how that minimizer is related to the properties of the individual functions in the sum. In this paper, we provide an upper bound on the region containing the minimizer of the sum of two strongly convex functions. We consider two scenarios with different constraints on the upper bound of the gradients of the functions. In the first scenario, the gradient co…
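As a quick, hedged illustration of the setup described in the abstract (not the paper's construction), the following Python sketch builds two strongly convex quadratics with arbitrary minimizers and Hessians, computes the minimizer of their sum in closed form, and checks the elementary containment ||x* - x_i*|| <= ||grad f_i(x*)|| / sigma_i that follows from strong convexity and first-order optimality. All matrices, minimizers, and names in the snippet are illustrative assumptions.

# Minimal numerical sketch (illustrative only, not the paper's bound): for two
# strongly convex quadratics, the minimizer of the sum lies in the intersection
# of balls around the individual minimizers with radii ||grad f_i(x*)|| / sigma_i.
import numpy as np

rng = np.random.default_rng(0)

def random_spd(n):
    """Random symmetric positive definite matrix (Hessian of a quadratic)."""
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

n = 2
A1, A2 = random_spd(n), random_spd(n)      # Hessians of the two quadratics
x1_star = rng.standard_normal(n)           # minimizer of f1
x2_star = rng.standard_normal(n)           # minimizer of f2
sigma1 = np.linalg.eigvalsh(A1)[0]         # strong convexity parameter of f1
sigma2 = np.linalg.eigvalsh(A2)[0]         # strong convexity parameter of f2

# f_i(x) = 0.5 (x - x_i*)^T A_i (x - x_i*), so grad f_i(x) = A_i (x - x_i*)
grad1 = lambda x: A1 @ (x - x1_star)
grad2 = lambda x: A2 @ (x - x2_star)

# Minimizer of f1 + f2 solves (A1 + A2) x = A1 x1* + A2 x2*
x_sum = np.linalg.solve(A1 + A2, A1 @ x1_star + A2 @ x2_star)

# First-order optimality: the two gradients cancel at the sum minimizer
assert np.allclose(grad1(x_sum) + grad2(x_sum), 0.0)

# Elementary containment implied by strong convexity
for x_i, sigma_i, grad_i in [(x1_star, sigma1, grad1), (x2_star, sigma2, grad2)]:
    dist = np.linalg.norm(x_sum - x_i)
    radius = np.linalg.norm(grad_i(x_sum)) / sigma_i
    print(f"distance {dist:.3f} <= radius {radius:.3f}: {dist <= radius + 1e-12}")

The printed inequalities hold for any positive definite A1, A2; the paper's region is presumably a refinement of this kind of elementary ball intersection, which the snippet only verifies numerically.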

Cited by 7 publications (4 citation statements) · References 22 publications

Citation statements:
“…For $x \notin C_i(\,)$, from the definition of convex functions, we have $\langle -g_i(x),\, x_i^* - x \rangle \ge f_i(x) - f_i(x_i^*)$. Using the inequality (11), we obtain…”
Section: B. Proof of Theorem (mentioning; confidence: 99%)
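The inequality quoted above is the first-order characterization of convexity, rearranged; a short reconstruction follows, with $g_i(x)$ a gradient of $f_i$ at $x$ and $x_i^*$ the minimizer of $f_i$ (the set $C_i$ and inequality (11) are internal to the citing paper and not reproduced here):

% First-order characterization of convexity:
%   f_i(y) \ge f_i(x) + \langle g_i(x), y - x \rangle  for all y.
% Taking y = x_i^* (the minimizer of f_i) and rearranging:
\begin{align*}
  f_i(x_i^*) &\ge f_i(x) + \langle g_i(x),\, x_i^* - x \rangle \\
  \Longrightarrow\quad
  \langle -g_i(x),\, x_i^* - x \rangle &\ge f_i(x) - f_i(x_i^*).
\end{align*}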
“…However, the above works focus on single-dimensional functions. The extension to general multi-dimensional functions remains largely open, since even the region containing the true minimizer of the functions is challenging to characterize in such cases [11]. The recent papers [12], [13] consider a vector version of the resilient decentralized machine learning problem by utilizing block coordinate descent.…”
Section: Introduction (mentioning; confidence: 99%)
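Block coordinate descent, named in the excerpt above, cyclically updates one block of coordinates with its partial gradient while holding the others fixed. The generic Python sketch below is purely illustrative and is not the resilient decentralized scheme of [12], [13]; the objective and block partition are arbitrary assumptions.

# Generic block coordinate (gradient) descent sketch on a smooth objective.
import numpy as np

def block_coordinate_descent(grad, x0, blocks, step=0.1, iters=200):
    """Cyclically apply a gradient step restricted to one coordinate block.

    grad:   function returning the full gradient at x
    blocks: list of index arrays partitioning the coordinates of x
    """
    x = x0.copy()
    for _ in range(iters):
        for idx in blocks:
            g = grad(x)
            x[idx] -= step * g[idx]   # update only this block of coordinates
    return x

# Example: minimize f(x) = 0.5 ||x - c||^2 with two coordinate blocks
c = np.array([1.0, -2.0, 3.0, 0.5])
grad = lambda x: x - c
x_hat = block_coordinate_descent(grad, np.zeros(4), [np.arange(2), np.arange(2, 4)])
print(np.round(x_hat, 3))  # approaches c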
“…In our recent paper [16], we determined a region containing the possible minimizers of a sum of two strongly convex functions, given only the minimizers of the local functions, their strong convexity parameters, and a bound on their gradients. In contrast, in this paper, we shall consider the case of optimizing a sum of known and unknown functions where only limited information about the unknown function is available.…”
Section: Introduction (mentioning; confidence: 99%)
“…In a recent line of work, Kuwaranancharoen and Sundaram [5]-[7] study this problem for $m = 2$ differentiable and strongly convex functions, with the additional knowledge of an upper bound on the norm of the gradient of each summand at the minimizer $x^\star$ (see Proposition (3.3) for our result in a similar setting). Using a geometric approach, they provide a near-exact characterization of the set of possible minimizers, using ad hoc coordinates (more precisely, [7, Theorem 6.2] characterizes the boundary of the region of potential minimizers, which coincides with its interior).…”
Section: Introduction (mentioning; confidence: 99%)
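For orientation, here is the elementary reason a gradient-norm bound at the minimizer constrains its location; this is a much weaker statement than the near-exact characterization of [7], and the bounds $L_i$ are hypothetical placeholders.

% At the minimizer x^\star of f_1 + f_2, first-order optimality gives
%   \nabla f_1(x^\star) + \nabla f_2(x^\star) = 0,
% so the two gradient norms agree at x^\star. Strong convexity of f_i
% (parameter \sigma_i, minimizer x_i^*) implies
%   \|\nabla f_i(x^\star)\| \ge \sigma_i \|x^\star - x_i^*\|,
% hence a bound \|\nabla f_i(x^\star)\| \le L_i confines x^\star to
\begin{equation*}
  x^\star \in \bigcap_{i=1,2} \bigl\{ x : \|x - x_i^*\| \le L_i / \sigma_i \bigr\}.
\end{equation*}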