DOI: 10.14711/thesis-b1155168

A probabilistic framework for learning task relationships in multi-task learning

Abstract:
Chapter 1 Introduction: 1.1 Multi-Task Learning; 1.2 Motivation; 1.3 Main Contributions; 1.4 Thesis Outline; 1.5 Notations
Chapter 2 Background: 2.1 A Brief Survey of Multi-Task Learning; 2.1.

Cited by 114 publications (227 citation statements): 3 supporting, 224 mentioning, 0 contrasting
References: 121 publications
“…In the current transfer counting method, we imposed the assumption that the source and target data share a similar manifold representation. Future work will explore ways to relax this assumption through automatic estimation of source-target relevance [37].…”
Section: Results (mentioning)
Confidence: 99%
“…Thus, how to capture task relatedness and incorporate it into an MTL framework is crucial. Although many different MTL methods [7,12,18,15,28,1] have been proposed, differing in how the relatedness across multiple tasks is modeled, they all rely on a parameter- or structure-sharing strategy to capture task relatedness.…”
Section: Introduction (mentioning)
Confidence: 99%
“…For example, some methods [3,27,28,29] use a regularized probabilistic setting, where sharing among tasks is based on a common prior. These approaches are usually computationally expensive.…”
Section: Introduction (mentioning)
Confidence: 99%
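For context (this is not spelled out in the excerpt above), a typical instantiation of such a regularized probabilistic setting places a common Gaussian prior over the task-specific weight vectors; a hedged sketch, with $m$ tasks, shared prior mean $\mathbf{w}_0$ and covariance $\boldsymbol{\Sigma}$, is

$$\mathbf{w}_i \sim \mathcal{N}(\mathbf{w}_0, \boldsymbol{\Sigma}), \qquad i = 1, \dots, m,$$

so that MAP estimation yields a coupling regularizer of the form $\sum_{i=1}^{m} (\mathbf{w}_i - \mathbf{w}_0)^{\mathrm{T}} \boldsymbol{\Sigma}^{-1} (\mathbf{w}_i - \mathbf{w}_0)$; learning $\mathbf{w}_0$ and $\boldsymbol{\Sigma}$ jointly with the task weights is one reason such approaches can be computationally demanding.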
“…The multi-task Gaussian process (GP) model [10] and its extension [11] are recently proposed methods that adopt this approach under the Bayesian framework. Moreover, Zhang and Yeung proposed a method in [12] to learn task relationships under the regularization framework for classification and regression problems, and then extended it for feature selection problems in [13].…”
Section: Introduction (mentioning)
Confidence: 99%
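As a point of reference (not stated explicitly in the excerpt), the multi-task GP model of [10] is usually written with a covariance that factorizes over tasks and inputs; a hedged sketch, with $K^f$ a positive semi-definite task similarity matrix and $k^x$ a covariance function over inputs, is

$$\operatorname{cov}\bigl(f_i(\mathbf{x}), f_j(\mathbf{x}')\bigr) = K^f_{ij}\, k^x(\mathbf{x}, \mathbf{x}'),$$

so that the inter-task relationships are carried entirely by $K^f$, which is learned from data alongside the hyperparameters of $k^x$.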
“…Our point of departure is a regularized method in [12] which learns the task relationships in the form of a task covariance matrix under a regularization framework and is related to maximum a posteriori (MAP) estimation of the weight-space interpretation of the multi-task GP model [10] presented in [11]. We then extend the formulation for boosting to give a method called multi-task boosting (MTBoost).…”
Section: Introduction (mentioning)
Confidence: 99%
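For readers unfamiliar with [12], the regularized task-relationship formulation that this line of work builds on is typically sketched as a joint optimization over the task weight matrix $W = [\mathbf{w}_1, \dots, \mathbf{w}_m]$ and a task covariance matrix $\Omega$; a hedged outline, with loss $\ell$, regularization parameters $\lambda_1, \lambda_2 \ge 0$, and $n_i$ examples for task $i$, is

$$\min_{W, \mathbf{b}, \Omega}\ \sum_{i=1}^{m} \sum_{j=1}^{n_i} \ell\bigl(y_{ij}, \mathbf{w}_i^{\mathrm{T}} \mathbf{x}_{ij} + b_i\bigr) + \frac{\lambda_1}{2} \|W\|_F^2 + \frac{\lambda_2}{2} \operatorname{tr}\bigl(W \Omega^{-1} W^{\mathrm{T}}\bigr) \quad \text{s.t.}\ \Omega \succeq 0,\ \operatorname{tr}(\Omega) = 1.$$

With $W$ fixed, $\Omega$ admits the closed-form update $\Omega = (W^{\mathrm{T}} W)^{1/2} / \operatorname{tr}\bigl((W^{\mathrm{T}} W)^{1/2}\bigr)$, and the trace term is what links the formulation to MAP estimation under a matrix-variate normal prior on $W$, as alluded to in the quotation above.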