2012
DOI: 10.1007/978-3-642-33460-3_50

Multi-Task Boosting by Exploiting Task Relationships

Abstract: Multi-task learning aims at improving the performance of one learning task with the help of other related tasks. It is particularly useful when each task has very limited labeled data. A central issue in multi-task learning is to learn and exploit the relationships between tasks. In this paper, we generalize boosting to the multi-task learning setting and propose a method called multi-task boosting (MTBoost). Different tasks in MTBoost share the same base learners but with different weights which are …
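The abstract is truncated, but the idea it states (every task reuses the same base learners while keeping its own weights for them) can be sketched in code. The following is a minimal illustration under that reading, not the paper's MTBoost algorithm: it uses squared-loss gradient boosting with shared regression stumps, and the class SharedBaseBoosting and all names in it are hypothetical.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

class SharedBaseBoosting:
    """Toy multi-task boosting sketch (not the paper's MTBoost): each round
    fits ONE base learner on the pooled residuals of all tasks, then each
    task picks its own weight for it via a closed-form line search under
    squared loss."""

    def __init__(self, n_rounds=50, max_depth=1):
        self.n_rounds = n_rounds
        self.max_depth = max_depth
        self.learners = []   # shared base learners, one per round
        self.weights = []    # weights[m][t]: task t's weight for round m

    def fit(self, Xs, ys):
        """Xs: list of (n_t, d) arrays; ys: list of (n_t,) targets, one per task."""
        T = len(Xs)
        F = [np.zeros(len(y), dtype=float) for y in ys]  # current task outputs
        X_all = np.vstack(Xs)
        for _ in range(self.n_rounds):
            # Shared step: one stump fit to all tasks' residuals at once.
            r_all = np.concatenate([y - f for y, f in zip(ys, F)])
            h = DecisionTreeRegressor(max_depth=self.max_depth)
            h.fit(X_all, r_all)
            # Task-specific step: each task weights the shared learner itself.
            w_round = []
            for t in range(T):
                h_t = h.predict(Xs[t])
                w_t = float(h_t @ (ys[t] - F[t])) / (float(h_t @ h_t) + 1e-12)
                F[t] += w_t * h_t
                w_round.append(w_t)
            self.learners.append(h)
            self.weights.append(w_round)
        return self

    def predict(self, X, task):
        return sum(w[task] * h.predict(X)
                   for h, w in zip(self.learners, self.weights))
```

In this sketch, related tasks end up assigning similar weight sequences to the shared learners, while an unrelated task can drive its weights toward zero; this is one way the shared-learner, per-task-weight design can express task relationships.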

Cited by 36 publications (4 citation statements: 0 supporting, 4 mentioning, 0 contrasting; citing statements published between 2014 and 2024). References 17 publications.

Citation statements (ordered by relevance):
“…It has been proved in [4], [5] that problem (21) is jointly convex with respect to W, b, and Ω. Problem (21) has been extended to multi-task boosting [83] and multi-label learning [84] by learning label correlations. Problem (21) can also be interpreted from the perspective of reproducing kernel Hilbert spaces for vector-valued functions [85], [86], [87], [88].…”
Section: Task Relation Learning Approach (mentioning; confidence: 99%)
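For readers without the survey at hand, a plausible reconstruction of the regularized problem the quote calls (21) is the multi-task relationship learning (MTRL) formulation of Zhang and Yeung, which matches the joint convexity in W, b, and Ω described above; the survey's exact equation may differ:

```latex
% Hedged reconstruction of "problem (21)": the MTRL objective. Columns of
% W = [w_1, ..., w_m] are the m task parameter vectors, b stacks the biases,
% and the PSD matrix \Omega models pairwise task relationships.
\begin{aligned}
\min_{W,\,b,\,\Omega}\ \ & \sum_{t=1}^{m} \sum_{i=1}^{n_t}
    \ell\big(y_i^t,\ w_t^{\top} x_i^t + b_t\big)
    + \frac{\lambda_1}{2}\,\lVert W \rVert_F^2
    + \frac{\lambda_2}{2}\,\operatorname{tr}\!\big(W \Omega^{-1} W^{\top}\big) \\
\text{s.t.}\ \ & \Omega \succeq 0, \qquad \operatorname{tr}(\Omega) = 1.
\end{aligned}
```

The joint convexity claim rests on the fact that the map (W, Ω) ↦ tr(W Ω^{-1} W^T) is jointly convex over the set where Ω is positive definite.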
“…One way is to group tasks into several clusters in which the tasks in the same group are regarded as related [Bakker and Heskes 2003; Jacob et al. 2008; Romera-Paredes et al. 2012; Zhou et al. 2011]. The other way is to learn the inter-task covariance matrix of the multivariate Gaussian prior, which can model both positive and negative task correlations [Fei and Huan 2013; Saha et al. 2011; Zhang and Yeung 2010, 2012a, 2012b, 2014]. Zhang [2015] proposed the convex DMTRC algorithm based on the second way.…”
Section: Negative Transfer (mentioning; confidence: 98%)
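For context on how an inter-task covariance models signed correlations (a standard construction, sketched here rather than quoted from any cited paper): placing a matrix-variate normal prior with task covariance Ω on the stacked parameter matrix W recovers the trace-form regularizer above, and the sign of an off-diagonal entry Ω_{st} states a priori whether tasks s and t are positively or negatively correlated.

```latex
% Sketch: matrix-variate normal prior with identity row (feature) covariance
% I_d and task covariance \Omega; its negative log-density recovers the
% tr(W \Omega^{-1} W^T) penalty, and sign(\Omega_{st}) encodes whether tasks
% s and t are positively or negatively correlated a priori.
W \sim \mathcal{MN}\!\big(0,\ I_d,\ \Omega\big)
\quad\Longrightarrow\quad
-\log p(W) = \tfrac{1}{2}\operatorname{tr}\!\big(W \Omega^{-1} W^{\top}\big) + \text{const}.
```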
“…Multi-Task Learning. State-of-the-art algorithms for multi-task learning [46], [20], [47], [48], [49], [25], [50] can also be roughly divided into two categories: (1) regularized multi-task feature learning methods [46], [20], [48], which assume all tasks are homogeneous and aim to discover a common feature representation across all tasks; and (2) task relationship exploration methods [47], [49], [25], [51], which either exploit task relationships via trace norm regularization so that similar tasks obtain similar parameters [51], or learn a task covariance matrix from data when the task relationship is not known in advance [47], [25].…”
Section: Related Work (mentioning; confidence: 99%)
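The trace-norm route this quote mentions can be made concrete. A minimal sketch (my illustration with hypothetical names, not code from any cited paper): proximal gradient methods for trace-norm-regularized multi-task learning only need the proximal operator of the norm, which is singular-value soft-thresholding of the stacked task parameter matrix W.

```python
import numpy as np

def prox_trace_norm(W, tau):
    """Proximal operator of tau * ||W||_* (trace/nuclear norm): soft-threshold
    the singular values of the d x m task parameter matrix W. Shrinking small
    singular values pushes tasks toward a shared low-rank subspace, which is
    how the penalty makes related tasks' parameters similar."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# One proximal gradient step for a smooth loss on the stacked per-task data
# (grad_loss, step size eta, and regularization strength lam are hypothetical):
# W = prox_trace_norm(W - eta * grad_loss(W), eta * lam)
```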