2020
DOI: 10.1109/TSP.2020.2987468
Adaptation and learning over networks under subspace constraints - Part II: Performance Analysis

Abstract: Part I of this paper considered optimization problems over networks where agents have individual objectives to meet, or individual parameter vectors to estimate, subject to subspace constraints that require the objectives across the network to lie in low-dimensional subspaces. Starting from the centralized projected gradient descent, an iterative and distributed solution was proposed that responds to streaming data and employs stochastic approximations in place of actual gradient vectors, which are generally u…
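The abstract describes a strategy that starts from centralized projected (stochastic) gradient descent under a subspace constraint. As a minimal illustrative sketch only, and not the paper's distributed algorithm, the snippet below runs that centralized variant on a hypothetical streaming least-squares problem, taking the consensus subspace as the simplest special case; the problem sizes, step size, and data model are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes and data model, chosen only for illustration.
N, M = 10, 3          # N agents, each estimating an M-dimensional parameter
mu = 0.01             # step size
T = 2000              # number of streaming iterations

# Subspace constraint: the stacked network vector col{w_1, ..., w_N} must lie
# in Range(U).  The simplest special case is the consensus subspace, where all
# agents share one parameter vector.
U, _ = np.linalg.qr(np.kron(np.ones((N, 1)), np.eye(M)))   # (N*M) x M, orthonormal
P_U = U @ U.T                                              # projector onto Range(U)

# Ground-truth model; agent k observes streaming data d_k(i) = u_k(i)^T w* + noise.
w_star = rng.standard_normal(M)
w = np.zeros(N * M)   # stacked network estimate col{w_1, ..., w_N}

for i in range(T):
    grad = np.zeros(N * M)
    for k in range(N):
        u_k = rng.standard_normal(M)                        # streaming regressor
        d_k = u_k @ w_star + 0.1 * rng.standard_normal()    # noisy measurement
        w_k = w[k * M:(k + 1) * M]
        # stochastic (instantaneous) gradient of agent k's quadratic cost
        grad[k * M:(k + 1) * M] = 2.0 * (u_k @ w_k - d_k) * u_k
    # centralized projected stochastic-gradient step: adapt, then project
    w = P_U @ (w - mu * grad)

print("per-agent estimation error:",
      np.linalg.norm(w.reshape(N, M) - w_star, axis=1))
```

The distributed solution analyzed in the paper replaces this network-wide projection with local combination steps through a combination matrix whose long-run behavior mimics the projector.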

Cited by 16 publications (22 citation statements)
References 33 publications
“…, w_N} can be written as W = Uw and, therefore, distributed coupled optimization can be recast in the form (2). It can be verified that the coupled diffusion strategy proposed in [23] for solving this problem can be written in the form of (9) and that the (doubly-stochastic) matrix A in [23] satisfies conditions (7) and (8).…”
Section: Distributed Inference Under Subspace Constraints (mentioning)
confidence: 96%
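The excerpt above recasts the coupled problem through the representation W = Uw and notes that a doubly-stochastic combination matrix A satisfies the required conditions (7) and (8). Those equations are not quoted here, so the following sketch only checks conditions of the kind typically imposed in this line of work (A leaves the subspace invariant from both sides, and the dynamics outside the subspace are contractive) on a hypothetical ring network with the consensus subspace; treat the correspondence to (7)-(8) as an assumption.

```python
import numpy as np

# Hypothetical strongly connected ring of N agents with a doubly-stochastic
# combination matrix A (uniform weights over self and the two neighbours).
# The subspace is the consensus subspace spanned by the all-ones vector.
N = 6
A = np.zeros((N, N))
for k in range(N):
    for l in (k - 1, k, (k + 1) % N):   # k - 1 == -1 wraps to the last agent
        A[k, l] = 1.0 / 3.0
assert np.allclose(A.sum(axis=0), 1) and np.allclose(A.sum(axis=1), 1)

U = np.ones((N, 1)) / np.sqrt(N)        # orthonormal basis of the subspace
P_U = U @ U.T                           # projector onto Range(U)

# Conditions of the type imposed on A in this setting (assumed here to be
# what (7)-(8) refer to): subspace invariance and contraction off-subspace.
print("A U   == U  :", np.allclose(A @ U, U))
print("U^T A == U^T:", np.allclose(U.T @ A, U.T))
print("rho(A - P_U) < 1:", max(abs(np.linalg.eigvals(A - P_U))) < 1)
```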
“…It can be verified that, when A satisfies (15) over a strongly connected network, the matrix A will satisfy (8), (13), (14), and (12). … with overlapping parameter vectors [22]–[24] can also be recast in the form (2). This scenario is illustrated in Fig.…”
Section: Distributed Inference Under Subspace Constraints (mentioning)
confidence: 99%