2020
DOI: 10.1109/tsp.2020.3021247
Online Proximal Learning Over Jointly Sparse Multitask Networks With $\ell _{\infty, 1}$ Regularization

Abstract: Modeling relations between the local optimum parameter vectors to be estimated in multitask networks has attracted much attention over the last years. This work considers a distributed optimization problem with a jointly sparse structure among nodes, that is, the local solutions share the same sparse support set. Several mixed norms have been proposed in the literature to address this jointly sparse structure. Among these candidates, the (reweighted) $\ell_{\infty,1}$-norm is element-wise separable, so it is more convenient to evaluate t…
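As a concrete illustration of why element-wise separability matters, the sketch below evaluates the proximal operator of a penalty of the form $\lambda \sum_l \max_k |W_{l,k}|$ row by row via the Moreau decomposition (prox of the $\ell_\infty$-norm equals identity minus projection onto the $\ell_1$-ball of radius $\lambda$). This is only a minimal NumPy sketch, assuming rows index coefficients and columns index nodes; it is not the paper's reweighted algorithm.

```python
import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection of v onto the l1-ball of the given radius
    (standard sort-based procedure)."""
    if np.sum(np.abs(v)) <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]                  # magnitudes, descending
    cssv = np.cumsum(u)
    idx = np.arange(1, v.size + 1)
    rho = np.nonzero(u - (cssv - radius) / idx > 0)[0][-1]
    theta = (cssv[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def prox_linf1(W, lam):
    """Prox of lam * sum_l max_k |W[l, k]|, evaluated independently per row
    via prox_{lam*||.||_inf}(v) = v - P_{l1-ball of radius lam}(v)."""
    return np.vstack([row - project_l1_ball(row, lam) for row in W])
```

Because the penalty separates over rows, each coefficient index can be handled independently of the others, which is what makes the operator cheap to evaluate online.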

Cited by 9 publications (3 citation statements). References 58 publications.
“…More recent works based on OGD and first order algorithms were proposed by [19,20]. They explored new structural regularizations and prove theoretical convergence improvements.…”
Section: B. Online Multi-task Learning
Mentioning confidence: 99%
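To make the OGD-based setup concrete, here is a minimal sketch of one proximal online-gradient step for a streaming least-squares loss with a generic sparsity penalty (plain $\ell_1$ for brevity). The function and parameter names are illustrative assumptions, not the algorithms of [19] or [20].

```python
import numpy as np

def soft_threshold(v, tau):
    """Prox of tau * ||v||_1 (element-wise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def online_prox_step(w, x_t, d_t, mu, lam):
    """One proximal online-gradient step for the instantaneous loss
    0.5 * (d_t - x_t.T w)^2 plus an l1 structural regularizer.

    w   : current estimate, shape (L,)
    x_t : regressor at time t, shape (L,)
    d_t : desired response at time t (scalar)
    mu  : step size, lam : regularization weight
    """
    grad = -(d_t - x_t @ w) * x_t      # gradient of the streaming loss
    return soft_threshold(w - mu * grad, mu * lam)
```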
“…During the past decade, distributed detection, estimation, and tracking problems have attracted substantial attention in the context of adaptive networks with diffusion strategies [1][2][3]. Several multitask strategies for adaptation and learning over networks have been recently proposed based on the diffusion least-mean-squares (DLMS) algorithm [4][5][6][7][8][9][10][11]. The DLMS algorithm for multitask networks was first proposed in [4,5], and studied in asynchronous networks in [6].…”
Section: Introduction
Mentioning confidence: 99%
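The diffusion LMS recursion referenced above is easy to sketch. Below is a minimal single-task adapt-then-combine (ATC) diffusion LMS iteration, assuming each column of W holds one node's estimate and A is a left-stochastic combination matrix; multitask variants additionally restrict combinations to same-cluster neighbours and add cross-node regularization, which is omitted here.

```python
import numpy as np

def atc_diffusion_lms(W, X, d, A, mu):
    """One adapt-then-combine diffusion LMS iteration over the network.

    W  : (L, N) current estimates, one column per node
    X  : (L, N) regressor of each node at the current time instant
    d  : (N,)   desired response of each node
    A  : (N, N) left-stochastic combination matrix (columns sum to 1)
    mu : step size
    """
    # Adaptation: each node takes an LMS step with its own data.
    errors = d - np.einsum('ln,ln->n', X, W)   # per-node a priori errors
    Psi = W + mu * X * errors                  # intermediate estimates
    # Combination: each node averages its neighbours' intermediate estimates.
    return Psi @ A
```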
“…A new multitask learning formulation using a common latent representation was presented in [7], as well as a unified framework to analyze its performance. Both ℓ1-norm regularization and ℓ∞,1-norm regularization were introduced into multitask networks in [9] and [10], respectively. Recently, the performance of the multitask DLMS algorithm has been analyzed in the presence of communication delays [11].…”
Section: Introduction
Mentioning confidence: 99%
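For reference, the two regularizers contrasted in [9] and [10] differ only in how they aggregate coefficients across nodes; a small sketch, again assuming coefficients along rows and nodes along columns of the network estimate matrix:

```python
import numpy as np

def l1_penalty(W):
    """Plain l1 penalty: promotes sparsity at each node independently."""
    return np.sum(np.abs(W))

def linf1_penalty(W):
    """Mixed l_{inf,1} penalty: l_inf across nodes (columns), l_1 across
    coefficients (rows), so entire rows are driven to zero jointly —
    the joint-sparsity structure targeted by the paper."""
    return np.sum(np.max(np.abs(W), axis=1))
```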