2020
DOI: 10.1109/tsp.2020.2970336
Adaptation and Learning Over Networks Under Subspace Constraints—Part I: Stability Analysis

Abstract: This paper considers optimization problems over networks where agents have individual objectives to meet, or individual parameter vectors to estimate, subject to subspace constraints that require the objectives across the network to lie in low-dimensional subspaces. This constrained formulation includes consensus optimization as a special case, and allows for more general task relatedness models such as smoothness. While such formulations can be solved via projected gradient descent, the resulting algorithm is…
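The abstract notes that subspace-constrained formulations of this kind can be solved via projected gradient descent. Below is a minimal sketch of that baseline, assuming a centralized solver with simple quadratic per-agent costs; it is not the paper's distributed algorithm, and all concrete quantities (U, A_k, b_k, mu, dimensions) are illustrative assumptions rather than anything taken from the paper.

```python
# Minimal sketch (centralized, not the paper's distributed algorithm):
# projected gradient descent for a subspace-constrained network problem.
# Each agent k has its own cost J_k(w_k); the stacked network vector
# w = col{w_1, ..., w_K} is constrained to lie in range(U).
import numpy as np

rng = np.random.default_rng(0)
K, M, d = 4, 3, 2                      # agents, per-agent parameter size, subspace dim

# Per-agent quadratic costs J_k(w_k) = 0.5 * ||A_k w_k - b_k||^2 (illustrative)
A = [rng.standard_normal((5, M)) for _ in range(K)]
b = [rng.standard_normal(5) for _ in range(K)]

# Subspace constraint on the stacked vector w in R^{KM}
U, _ = np.linalg.qr(rng.standard_normal((K * M, d)))   # orthonormal basis of the subspace
P = U @ U.T                                            # orthogonal projector onto range(U)
# Consensus optimization is the special case where U spans the agreement
# subspace (all blocks equal), e.g. U = (1_K kron I_M) / sqrt(K).

w = np.zeros(K * M)
mu = 0.02                                              # step size
for _ in range(3000):
    blocks = w.reshape(K, M)                           # w_k is the k-th block of w
    grad = np.concatenate([Ak.T @ (Ak @ wk - bk)       # gradient of each J_k
                           for Ak, bk, wk in zip(A, b, blocks)])
    w = P @ (w - mu * grad)                            # gradient step, then project

print("distance to subspace:", np.linalg.norm(w - P @ w))   # ~0: iterate stays feasible
```

The projection P is what couples the agents; choosing U as a basis of the agreement subspace recovers plain consensus optimization, the special case mentioned in the abstract.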

Cited by 24 publications (56 citation statements). References 45 publications.

Citation statements, ordered by relevance:
“…As explained in Part I [2], in the general complex data case, extended vectors and matrices need to be introduced in order to analyze the network evolution. The arguments and results presented in this section are applicable to both cases of real and complex data through the use of data-type variable h. Table I lists a couple of variables and symbols that will be used in the sequel for both real and complex data cases.…”
Section: Gradient Vector
confidence: 99%
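As a reading aid for the data-type variable h mentioned in this excerpt, the sketch below illustrates one common convention in this line of work; the convention shown (h = 1 for real data, h = 2 for complex data, with complex vectors augmented by their conjugates into extended vectors) is an assumption on my part and may differ from the exact definitions in [2].

```python
# Hedged illustration (assumed convention, not a reproduction of [2]):
# h = 1 for real data, h = 2 for complex data, where complex quantities are
# augmented with their conjugates so one set of recursions covers both cases.
import numpy as np

def extend(x):
    """Return (h, extended vector): x unchanged for real data, col{x, conj(x)} otherwise."""
    x = np.asarray(x)
    if np.iscomplexobj(x):
        return 2, np.concatenate([x, np.conj(x)])
    return 1, x

h_r, xr = extend([1.0, -2.0])
h_c, xc = extend([1.0 + 1j, 2.0 - 3j])
print(h_r, xr.shape)   # 1 (2,)  -- real case: nothing to extend
print(h_c, xc.shape)   # 2 (4,)  -- complex case: dimension doubles (factor h)
```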
“…A. Modeling assumptions from Part I [2]. In this section, we recall the assumptions used in Part I [2] to establish the network mean-square error stability (14). We first introduce the Hermitian Hessian matrix functions (see [2, Sec.…”
Section: Gradient Vector
confidence: 99%
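The Hermitian Hessian matrix functions recalled in this excerpt typically enter the stability analysis through boundedness conditions on the individual costs. As a hedged illustration only (the precise assumptions are the ones stated in [2] and are not reproduced here), conditions of this kind often take the form

$$
0 \;<\; \nu\, I \;\le\; \nabla_{w_k}^{2}\, J_k(w_k) \;\le\; \delta\, I,
\qquad 0 < \nu \le \delta,
$$

where a bound of this type implies strong convexity (lower bound) and Lipschitz-continuous gradients (upper bound), the kind of regularity that mean-square error stability arguments such as the result cited as (14) rely on.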