Proceedings of the 2016 SIAM International Conference on Data Mining
DOI: 10.1137/1.9781611974348.75

Learning Correlative and Personalized Structure for Online Multi-Task Classification

Abstract: Multi-Task Learning (MTL) can enhance a classifier's generalization performance by learning multiple related tasks simultaneously. Conventional MTL works in an offline or batch setting and suffers from expensive training costs and poor scalability. To address these inefficiencies, online learning techniques have been applied to MTL problems. However, most existing algorithms for online MTL constrain task relatedness into a presumed structure via a single weight matrix, a…

Cited by 4 publications (4 citation statements, published 2017–2018) · References 26 publications

“…Although the composite problem (11) can be solved by [49], composite functions with linear constraints have not been investigated for the MTL problem. We employ a projected gradient scheme [50], [51] to optimize this problem with both smooth and nonsmooth terms.…”
Section: Optimization (mentioning, confidence: 99%)
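
The projected/proximal gradient idea referenced in this statement splits the objective into a smooth loss plus a nonsmooth regularizer, alternating a gradient step on the former with a proximal (or projection) step on the latter. Below is a minimal Python sketch of this generic scheme; the L1 regularizer, step size, and function names are illustrative assumptions, not the paper's exact problem (11).

import numpy as np

def soft_threshold(W, tau):
    # Proximal operator of tau * ||W||_1: elementwise soft-thresholding.
    return np.sign(W) * np.maximum(np.abs(W) - tau, 0.0)

def proximal_gradient(grad_f, W0, lam, step=0.01, n_iters=200):
    # Minimize f(W) + lam * ||W||_1 with f smooth: alternate a gradient
    # step on f with the prox step of the nonsmooth L1 term.
    W = W0.copy()
    for _ in range(n_iters):
        W = soft_threshold(W - step * grad_f(W), step * lam)
    return W

# Hypothetical usage with a quadratic smooth term f(W) = 0.5 * ||A @ W - B||_F^2:
#   grad_f = lambda W: A.T @ (A @ W - B)
#   W_hat = proximal_gradient(grad_f, np.zeros((A.shape[1], B.shape[1])), lam=0.1)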
“…We employ a projected gradient scheme [50], [51] to optimize this problem with both smooth and nonsmooth terms. Specifically, by substituting (12) into (11) and omitting the terms unrelated to U and V, the problem can be rewritten as a projected gradient scheme,…”
Section: Optimization (mentioning, confidence: 99%)
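
This second statement describes the same schema applied blockwise: fix one factor, take a gradient step on the other, and project it back onto the feasible set. The sketch below illustrates the pattern with a squared residual loss and a nonnegativity constraint standing in for the linear constraints; the factorization X ≈ UV and all parameter choices are assumptions for illustration, not the updates derived from (11) and (12).

import numpy as np

def project_nonnegative(M):
    # Euclidean projection onto the nonnegative orthant (a linear constraint).
    return np.maximum(M, 0.0)

def alternating_projected_gradient(X, rank, step=1e-3, n_iters=300):
    # Fit X ~= U @ V with U, V >= 0 by alternating projected-gradient steps.
    m, n = X.shape
    rng = np.random.default_rng(0)
    U = rng.random((m, rank))
    V = rng.random((rank, n))
    for _ in range(n_iters):
        R = U @ V - X                                    # residual of the smooth term
        U = project_nonnegative(U - step * (R @ V.T))    # gradient step in U, then project
        R = U @ V - X
        V = project_nonnegative(V - step * (U.T @ R))    # gradient step in V, then project
    return U, V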
“…For the first limitation (imbalanced classes), cost-sensitive learning [11] and positive-unlabeled matrix completion algorithms [13] have been proposed to exploit an asymmetric cost of error between positive and unlabeled samples. For the second limitation (outlier estimation), a robust matrix decomposition algorithm [14] has been developed to detect outliers and learn a basis from the recovered subspace; it has shown success in different applications, including system identification [10], multi-task learning [47], [44], PCA [8], and graphical modeling [9]. Although each aforementioned algorithm, e.g.…”
Section: Introduction (mentioning, confidence: 99%)
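
For context on the robust matrix decomposition this statement mentions, the common formulation splits a data matrix into a low-rank part (the recovered subspace) plus a sparse part (the outliers). The heuristic sketch below alternates a truncated SVD with elementwise soft-thresholding; the fixed rank and threshold are assumptions for illustration, and this is not the algorithm of [14].

import numpy as np

def low_rank_plus_sparse(X, rank=5, tau=0.1, n_iters=50):
    # Alternate: L = best rank-k fit of X - S; S = soft-threshold of X - L.
    S = np.zeros_like(X)
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]          # truncated SVD: low-rank part
        D = X - L
        S = np.sign(D) * np.maximum(np.abs(D) - tau, 0.0) # sparse part: the outliers
    return L, S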