2012
DOI: 10.1007/s10994-012-5302-y

Focused multi-task learning in a Gaussian process framework

Abstract: Multi-task learning, learning of a set of tasks together, can improve performance in the individual learning tasks. Gaussian process models have been applied to learning a set of tasks on different data sets, by constructing joint priors for functions underlying the tasks. In these previous Gaussian process models, the setting has been symmetric in the sense that all the tasks have been assumed to be equally important, whereas in settings such as transfer learning the goal is asymmetric, to enhance performance…
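
To make the joint-prior construction concrete, here is a minimal numpy sketch of a symmetric two-task GP prior. The Kronecker/task-similarity construction and every value in it (the matrix B, the lengthscale) are illustrative assumptions, not the focused asymmetric model the paper proposes:

```python
import numpy as np

# Minimal sketch of a joint GP prior over two tasks (illustrative
# construction, NOT the paper's focused model). A task-similarity
# matrix B couples the tasks; the joint covariance is kron(B, K_x).

def rbf(x1, x2, lengthscale=0.3):
    """Squared-exponential covariance between two input vectors."""
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

x = np.linspace(0.0, 1.0, 50)
Kx = rbf(x, x)                          # auto-covariance over inputs
B = np.array([[1.0, 0.8],               # task-similarity matrix (assumed);
              [0.8, 1.0]])              # off-diagonal 0.8 couples the tasks
K = np.kron(B, Kx)                      # joint prior covariance for both tasks

rng = np.random.default_rng(0)
f = rng.multivariate_normal(np.zeros(2 * x.size), K + 1e-8 * np.eye(2 * x.size))
f1, f2 = f[:x.size], f[x.size:]         # correlated function draws, one per task
```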

Cited by 21 publications (19 citation statements)
References 10 publications
“…As the dataset with stronger significance might force its own features onto the reconstruction of the weaker dataset. Such a phenomenon is called negative transfer (see [62][63][64] for extensive discussion of this issue). We demonstrate this effect in Section 4.3.…”
Section: Disadvantages/cautions (mentioning)
confidence: 99%
“…They are based on approaches such as sparse linear combination of independent single-task GPs [41], the multi-kernel method [42], convolved latent processes [43], and spectral mixture kernels [44]. There are also asymmetric multitask GPs, which model several tasks together with the objective of enhancing the predictions of only a subset of the tasks by transferring information from the other tasks to them [45]. Readers who are interested in learning more about advanced multitask GPs are encouraged to read the article by Liu et al. [46].…”
Section: Multitask Learning (mentioning)
confidence: 99%
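
As a concrete illustration of the first construction in that list, the sketch below builds a two-task covariance as a linear combination of independent latent GPs. The mixing matrix A and the lengthscales are made-up illustrative values, not parameters from [41]:

```python
import numpy as np

# Two tasks as linear combinations of J independent latent GPs:
#   f_t(x) = sum_j A[t, j] * g_j(x),  g_j ~ GP(0, k_j)
# which induces Cov(f_t(x), f_s(x')) = sum_j A[t, j] * A[s, j] * k_j(x, x').

def rbf(x1, x2, lengthscale):
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def lmc_cov(x, A, lengthscales):
    """Joint covariance, tasks stacked block-wise: [task 0 | task 1 | ...]."""
    T, J = A.shape
    n = x.size
    K = np.zeros((T * n, T * n))
    for j in range(J):
        Kj = rbf(x, x, lengthscales[j])            # kernel of latent GP j
        for t in range(T):
            for s in range(T):
                K[t*n:(t+1)*n, s*n:(s+1)*n] += A[t, j] * A[s, j] * Kj
    return K

x = np.linspace(0.0, 1.0, 40)
A = np.array([[1.0, 0.0],      # sparse mixing: task 0 uses only latent GP 0,
              [0.7, 0.7]])     # task 1 mixes both, so the tasks share signal
K = lmc_cov(x, A, lengthscales=[0.1, 0.4])
```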
“…We will use a hat, $\hat{k}(s)$, to denote the spectral density of a covariance function $k(\tau)$ in the frequency domain. Using the following definition, the spectral density of the kernel function $k(\tau)$ is given by its Fourier transform:

$$\hat{k}(s) = \int k(\tau)\, e^{-2\pi \tau s \iota}\, d\tau \tag{7}$$

where $\iota$ is the imaginary number. Furthermore, the inverse Fourier transform of the spectral density $\hat{k}(s)$ recovers the original kernel function $k(\tau)$.…”
Section: Spectral Mixture Kernels (mentioning)
confidence: 99%
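
Equation (7) can be verified numerically. The sketch below uses the squared-exponential kernel, whose spectral density under this Fourier convention is known in closed form; the lengthscale and the quadrature grid are arbitrary illustrative choices, not values from the cited paper:

```python
import numpy as np

# Check Eq. (7) numerically for the squared-exponential kernel
#   k(tau) = exp(-tau^2 / (2 l^2)),
# whose closed-form spectral density under this Fourier convention is
#   k_hat(s) = l * sqrt(2*pi) * exp(-2 * pi^2 * l^2 * s^2).

ell = 0.5
tau = np.linspace(-20.0, 20.0, 40001)          # fine, wide grid for quadrature
dtau = tau[1] - tau[0]
k = np.exp(-tau**2 / (2.0 * ell**2))

def k_hat(s):
    """Riemann-sum approximation of Eq. (7) at frequency s."""
    integrand = k * np.exp(-2j * np.pi * tau * s)
    return (integrand.sum() * dtau).real       # k is even, so k_hat is real

s_grid = np.linspace(0.0, 2.0, 5)
numeric = np.array([k_hat(s) for s in s_grid])
analytic = ell * np.sqrt(2 * np.pi) * np.exp(-2 * np.pi**2 * ell**2 * s_grid**2)
print(np.abs(numeric - analytic).max())        # agrees to ~1e-10
```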
“…The extension of GPs to multiple sources of data is known as multi-task Gaussian processes (MTGPs) [3]. MTGPs model temporal or spatial relationships among infinitely many random variables, as scalar GPs do, but also account for the statistical dependence across different sources of data (or tasks) [3,4,5,6,7,8,9]. How to choose an appropriate kernel to jointly model the cross-covariance between tasks and the auto-covariance within each task is the core aspect of MTGP design [3,10,11,12,5,13,14].…”
Section: Introduction (mentioning)
confidence: 99%
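
To see how a cross-covariance term carries information between tasks, here is a minimal sketch assuming an intrinsic-coregionalization-style factorization K((x,t),(x',t')) = B[t,t'] k_x(x,x'); the coupling strength, noise level, and toy data are illustrative assumptions, not taken from the cited works:

```python
import numpy as np

# ICM-style MTGP regression sketch: the joint kernel factorizes as
#   K((x, t), (x', t')) = B[t, t'] * k_x(x, x'),
# where B models the cross-covariance between tasks and k_x the
# auto-covariance over inputs. Observing task 0 informs task 1 through B.

def rbf(x1, x2, lengthscale=0.3):
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

B = np.array([[1.0, 0.7],               # assumed task covariance; the 0.7
              [0.7, 1.0]])              # off-diagonal carries the transfer

rng = np.random.default_rng(1)
x_tr = np.linspace(0.0, 1.0, 15)        # noisy observations of task 0 only
y_tr = np.sin(2 * np.pi * x_tr) + 0.1 * rng.normal(size=x_tr.size)
x_te = np.linspace(0.0, 1.0, 100)       # test inputs for task 1

K_tr = B[0, 0] * rbf(x_tr, x_tr) + 0.01 * np.eye(x_tr.size)  # task-0 auto-cov + noise
K_cross = B[1, 0] * rbf(x_te, x_tr)     # cross-covariance: task 1 vs task 0
mean_task1 = K_cross @ np.linalg.solve(K_tr, y_tr)  # GP posterior mean, task 1
```

Setting the off-diagonal of B to zero removes the transfer entirely, which makes the role of the cross-covariance in MTGP design explicit.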