2020
DOI: 10.1609/aaai.v34i04.6139

Distributed Primal-Dual Optimization for Online Multi-Task Learning

Abstract: Conventional online multi-task learning algorithms suffer from two critical limitations: 1) heavy communication caused by delivering high-velocity sequential data to a central machine; 2) expensive runtime complexity for building task relatedness. To address these issues, we consider a setting where multiple tasks are geographically located in different places and one task can synchronize data with others to leverage knowledge of related tasks. Specifically, we propose an adaptive primal-du…
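The abstract is truncated here, so the details of the proposed adaptive primal-dual method are not recoverable from this page. The sketch below only illustrates the general setting it describes: each task keeps its own online learner on its own machine and models are synchronized periodically instead of streaming every example to a central node. The `TaskWorker`, `online_step`, and `synchronize` names, the passive-aggressive (PA-I) update, and the simple model averaging are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

# Hypothetical sketch (not the paper's adaptive primal-dual method):
# each worker owns one task and updates its model online; the closed-form
# PA-I step size plays the role of a per-round dual variable. Workers
# average models only every few rounds, limiting communication.

class TaskWorker:
    def __init__(self, dim, C=1.0):
        self.w = np.zeros(dim)   # task-specific model (primal variable)
        self.C = C               # aggressiveness / regularization trade-off

    def online_step(self, x, y):
        """PA-I update on one example (x, y) with y in {-1, +1}."""
        loss = max(0.0, 1.0 - y * self.w.dot(x))       # hinge loss
        tau = min(self.C, loss / (x.dot(x) + 1e-12))   # step size (dual variable)
        self.w += tau * y * x

def synchronize(workers):
    """Blend each task model with the average; one crude way to couple related tasks."""
    avg = np.mean([wk.w for wk in workers], axis=0)
    for wk in workers:
        wk.w = 0.5 * wk.w + 0.5 * avg                  # keep some task identity

# Toy usage: three geographically separate tasks, synced every 10 rounds.
rng = np.random.default_rng(0)
workers = [TaskWorker(dim=5) for _ in range(3)]
for t in range(100):
    for wk in workers:
        x = rng.normal(size=5)
        y = 1.0 if x.sum() > 0 else -1.0
        wk.online_step(x, y)
    if (t + 1) % 10 == 0:
        synchronize(workers)
```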

Cited by 3 publications (3 citation statements) · References 17 publications
“…(vi) In multi-task multi-view learning, each task exploits multi-view data. Recent years have witnessed extensive studies on streaming data, known as online multi-task learning [209]; this class of methods is used when training data in multiple tasks arrive sequentially, hence (vii) in multi-task online learning, each task processes sequential data.…”
Section: Multi-task Learning · Citation type: mentioning · Confidence: 99%
“…Given the nature of its process, MTL has been studied under decentralized settings where each machine learns a separate, but related, task. In this vein, multiple parallel and distributed MTL models have been introduced in the recent literature [209][210][211]. Recently, research in MTL using DNNs has produced a wide spectrum of approaches that have yielded impressive results on tasks and applications such as image processing [212], NLP [213], and biomedicine [214].…”
Section: Multi-task Learning · Citation type: mentioning · Confidence: 99%
“…The most critical research on parallel learning is parallelizing gradient descent [19], [50], [56], [68], owing to its widespread adoption in machine learning tasks. Building on this research, a great many high-level machine learning models can be transferred to distributed learning paradigms, for example distributed ridge regression [47], distributed multi-task learning [57], distributed tensor decomposition [21], and distributed deep learning [2]. However, parallel learning assumes that data is centralized in one party, which has multiple computing clusters.…”
Section: Distributed Computing on Optimization Approaches · Citation type: mentioning · Confidence: 99%
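The statement above describes the basic data-parallel pattern that the cited distributed methods build on: each machine computes a gradient on its local shard and an aggregation step combines them. The sketch below is a minimal, hypothetical illustration of that pattern for least-squares regression; the function names and the synchronous averaging scheme are assumptions for illustration, not taken from any of the cited systems.

```python
import numpy as np

# Hypothetical sketch of data-parallel gradient descent: every machine
# computes a local gradient on its shard, then the averaged gradient is
# applied as a single global step.

def local_gradient(w, X, y):
    """Least-squares gradient on one machine's data shard."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def distributed_gd(shards, dim, lr=0.05, rounds=200):
    w = np.zeros(dim)
    for _ in range(rounds):
        grads = [local_gradient(w, X, y) for X, y in shards]  # in parallel in practice
        w -= lr * np.mean(grads, axis=0)                      # aggregated global step
    return w

# Toy usage: 4 shards of synthetic linear-regression data.
rng = np.random.default_rng(1)
true_w = rng.normal(size=3)
shards = []
for _ in range(4):
    X = rng.normal(size=(50, 3))
    shards.append((X, X @ true_w + 0.01 * rng.normal(size=50)))
w_hat = distributed_gd(shards, dim=3)
```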