In comparison to conventional single-task learning, multi-task learning (MTL) [1] provides a learning paradigm that leverages knowledge across related tasks to improve their generalization performance. Multi-task learning overlaps with other domains such as transfer learning [2], multi-view learning [3], and multi-fidelity modeling [4]. Among current MTL paradigms, the multi-task Gaussian process (MTGP), the topic of this paper, inherits the non-parametric, Bayesian properties of the Gaussian process (GP) [5]: it yields not only a prediction mean but also an associated prediction variance, and thus finds widespread application in, e.g., multi-task regression and classification, multivariate time series analysis [6], multi-task Bayesian optimization [7,8], and multi-view learning [9].
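The prediction mean and variance mentioned above come from the standard GP regression posterior. As a minimal illustration (a single-task sketch with an assumed squared-exponential kernel and fixed hyperparameters, not the MTGP models surveyed in this paper), the posterior at a test input combines the training data through the kernel matrix:

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel k(x, x') = s^2 * exp(-(x - x')^2 / (2 l^2)),
    # here for 1-D inputs given as flat arrays.
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_predict(X_train, y_train, X_test, noise=1e-4):
    # GP regression posterior: mean and per-point variance at X_test,
    # computed via a Cholesky factorization of the noisy training kernel.
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test)
    K_ss = rbf_kernel(X_test, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha                       # posterior mean
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v**2, axis=0) # posterior variance
    return mean, var

# Toy usage: near the training data the variance shrinks;
# far away it reverts to the prior variance.
X_train = np.array([-1.0, 0.0, 1.0])
y_train = np.sin(X_train)
mean, var = gp_predict(X_train, y_train, np.array([0.0, 10.0]))
```

The variance is what distinguishes GP-based methods in the applications listed, e.g., Bayesian optimization uses it to trade off exploration against exploitation.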