2017
DOI: 10.48550/arxiv.1707.08114
Preprint

A Survey on Multi-Task Learning

Abstract: Multi-Task Learning (MTL) is a learning paradigm in machine learning whose aim is to leverage useful information contained in multiple related tasks to help improve the generalization performance of all the tasks. In this paper, we give a survey of MTL. First, we classify different MTL algorithms into several categories, including the feature learning approach, low-rank approach, task clustering approach, task relation learning approach, and decomposition approach, and then discuss the characteristics of each approach.
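To make the taxonomy concrete, here is a minimal sketch of the feature learning approach in its simplest form, hard parameter sharing: two tasks are trained through a common linear feature map with task-specific output heads. All names, shapes, and the synthetic data are illustrative assumptions, not anything specified by the survey.

```python
import numpy as np

# Minimal hard-parameter-sharing sketch: a shared linear feature map U
# feeds task-specific heads a[t]. Shapes, step size, and synthetic data
# are illustrative assumptions, not the survey's method.
rng = np.random.default_rng(0)
d, k, n = 10, 4, 200                      # input dim, shared dim, samples per task

# Two related regression tasks drawn from nearby ground-truth weights.
w_true = rng.normal(size=d)
tasks = []
for shift in (0.0, 0.3):
    X = rng.normal(size=(n, d))
    y = X @ (w_true + shift * rng.normal(size=d)) + 0.1 * rng.normal(size=n)
    tasks.append((X, y))

U = 0.01 * rng.normal(size=(d, k))        # shared representation, learned jointly
a = [0.01 * rng.normal(size=k) for _ in tasks]   # task-specific heads
lr = 0.01

for step in range(500):
    grad_U = np.zeros_like(U)
    for t, (X, y) in enumerate(tasks):
        resid = X @ U @ a[t] - y                          # per-task residual
        grad_U += (2 / n) * X.T @ np.outer(resid, a[t])   # shared gradient sums over tasks
        a[t] -= lr * (2 / n) * U.T @ X.T @ resid          # task-specific update
    U -= lr * grad_U

for t, (X, y) in enumerate(tasks):
    print(f"task {t} MSE: {np.mean((X @ U @ a[t] - y) ** 2):.4f}")
```

The shared gradient accumulating over both tasks is the point: information from every task shapes the common representation, while each head stays private to its task.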

Cited by 309 publications (320 citation statements)
References 159 publications (171 reference statements)
“…Multi-task Learning. Multi-task learning [30], [38] is a machine learning paradigm that aims to leverage useful information across multiple tasks to improve the performance of each task, and it has many applications in areas such as natural language processing [32], computer vision [15], [3], and speech recognition [4], [29]. In the early literature on multi-task learning, [5] proposes a kernel method and a regularization model to learn multiple tasks simultaneously.…”
Section: Related Work (mentioning)
Confidence: 99%
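The regularization idea the excerpt attributes to [5] is easy to sketch in its simplest linear form: each task keeps its own weight vector, and a penalty ties all of them to their shared mean. This is a minimal, assumption-laden illustration of that family of methods, not a reproduction of [5] itself.

```python
import numpy as np

# Mean-regularized multi-task regression: per-task weights W[t] plus a
# penalty lam * sum_t ||W[t] - mean(W)||^2 pulling the tasks together.
# A hedged sketch of the regularization family, not the exact model in [5].
rng = np.random.default_rng(1)
d, n, T, lam, lr = 8, 100, 3, 0.5, 0.05

w_true = rng.normal(size=d)
data = []
for _ in range(T):
    X = rng.normal(size=(n, d))
    y = X @ (w_true + 0.2 * rng.normal(size=d)) + 0.1 * rng.normal(size=n)
    data.append((X, y))

W = np.zeros((T, d))                       # one weight vector per task
for step in range(1000):
    w_bar = W.mean(axis=0)                 # shared mean across tasks
    for t, (X, y) in enumerate(data):
        # gradient of the penalty w.r.t. W[t] is exactly 2*lam*(W[t]-w_bar),
        # since the cross terms through w_bar cancel
        grad = (2 / n) * X.T @ (X @ W[t] - y) + 2 * lam * (W[t] - w_bar)
        W[t] -= lr * grad

print("spread of task weights around the mean:",
      np.linalg.norm(W - W.mean(axis=0), axis=1))
```

Larger `lam` shrinks all tasks toward a single shared model; `lam = 0` recovers independent single-task regressions.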
“…Multi-Objective Optimization (MOO) refers to the paradigm of learning multiple related objectives together [5,7,31,58]. Recently, four thrusts of MOO methods have emerged that consider the negative transfer problem.…”
Section: Related Work (mentioning)
Confidence: 99%
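One building block common to gradient-based MOO methods is worth illustrating: the two-task min-norm gradient combination (MGDA-style), which picks a single update direction that does not sacrifice either objective. The closed form below is the standard two-gradient derivation; the toy gradients are assumptions for the sake of the example, not from any cited paper.

```python
import numpy as np

def min_norm_two_tasks(g1, g2):
    """Closed-form MGDA-style weight for two task gradients:
    minimize ||a*g1 + (1-a)*g2||^2 over a in [0, 1]."""
    diff = g1 - g2
    denom = diff @ diff
    if denom == 0.0:                      # identical gradients: any weighting works
        return 0.5
    a = (g2 - g1) @ g2 / denom            # unconstrained minimizer
    return float(np.clip(a, 0.0, 1.0))

# Toy gradients (illustrative assumptions).
g1 = np.array([1.0, 0.0, 2.0])
g2 = np.array([-0.5, 1.0, 0.5])
a = min_norm_two_tasks(g1, g2)
g = a * g1 + (1 - a) * g2                 # common update direction for both tasks
print(f"alpha = {a:.3f}, combined gradient = {g}")
# Both dot products are non-negative, so stepping along -g decreases both losses.
print("dot with g1:", g @ g1, " dot with g2:", g @ g2)
```

Intuitively, when one task's gradient is much larger, the min-norm weighting down-weights it instead of letting it dominate, which is one way these methods address negative transfer.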
“…However, the commonly encountered data-sparsity problem often hurts the generalization of the learned matching model [15,34], usually because annotations of matching relationships are lacking. To tackle this problem, Multi-source Entity Matching (MEM), which exploits data from one or several auxiliary sources, has become a mainstream solution [6,8,25,58,60,61]. Nevertheless, existing MEM approaches usually assume that data distributions/spaces are shared between sources, or that sufficient annotations of entity correspondence between sources can be acquired [6,8,25,60,61], which may not hold in real applications.…”
Section: Introduction (mentioning)
Confidence: 99%
“…It aims to first learn a global model and then efficiently adapt it to individual robots while minimizing the extra training cost of personalization. This technique is inspired by multi-task learning [22], [27] and meta-learning [26], and is especially important in situations where the cost of personalization is relatively high, such as for low-power robots or mobile phones. Another approach to personalized FL is to weight the parameters of a global model against those of a unique personalized local model.…”
Section: Related Work (mentioning)
Confidence: 99%
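The global-then-personalized recipe this excerpt describes can be sketched in a few lines: average per-client updates into a global model, then let each client fine-tune its own copy for a handful of cheap local steps. Client data, the linear model, and all step counts below are assumptions chosen to keep the sketch self-contained, not the cited papers' methods.

```python
import numpy as np

# Sketch of "global model, then cheap per-client personalization" for a
# linear regression objective: FedAvg-style averaging + local fine-tuning.
rng = np.random.default_rng(2)
d, n, clients, lr = 5, 50, 4, 0.05

local = []
for c in range(clients):
    w_c = rng.normal(size=d)               # each client has its own ground truth
    X = rng.normal(size=(n, d))
    y = X @ w_c + 0.1 * rng.normal(size=n)
    local.append((X, y))

def sgd_steps(w, X, y, steps):
    """A few plain gradient steps on mean squared error."""
    for _ in range(steps):
        w = w - lr * (2 / len(y)) * X.T @ (X @ w - y)
    return w

# Phase 1: learn a global model by averaging per-client updates.
w_global = np.zeros(d)
for rnd in range(20):
    w_global = np.mean([sgd_steps(w_global.copy(), X, y, steps=5)
                        for X, y in local], axis=0)

# Phase 2: personalize with a few extra local steps from the global init.
for c, (X, y) in enumerate(local):
    w_pers = sgd_steps(w_global.copy(), X, y, steps=10)
    mse_g = np.mean((X @ w_global - y) ** 2)
    mse_p = np.mean((X @ w_pers - y) ** 2)
    print(f"client {c}: global MSE {mse_g:.3f} -> personalized MSE {mse_p:.3f}")
```

The design point the excerpt highlights is the cost asymmetry: the expensive averaging rounds are shared once, while personalization is only a few local steps per client, which matters on low-power robots or phones.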