2021
DOI: 10.1109/tnnls.2020.3028022

Task Similarity Estimation Through Adversarial Multitask Neural Network


Cited by 19 publications (10 citation statements)
References 28 publications
“…Multitask Learning: Several multitask learning models have been designed that aim to share knowledge across tasks to improve the performance of each task (Ruder 2017; Hashimoto et al. 2016; Zhang and Yang 2017; Guo et al. 2018; Vandenhende et al. 2020; Gagné 2019; Zhou et al. 2020a). For example, Standley et al. (2020) proposed a framework where tasks are grouped and learned by exploiting the cooperative and competitive relationships among the tasks.…”
Section: Related Work (mentioning)
confidence: 99%
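The knowledge sharing described in this excerpt is most commonly realized through hard parameter sharing: a shared encoder feeds task-specific heads, so gradients from every task update the common representation. The following is a minimal PyTorch sketch of that pattern, not the architecture from the cited papers; all module names and dimensions are illustrative assumptions.

```python
# Minimal hard-parameter-sharing multitask network (illustrative sketch,
# not the method of the paper under discussion). A shared encoder extracts
# features reused by every task; each task keeps its own lightweight head.
import torch
import torch.nn as nn

class SharedMultitaskNet(nn.Module):
    def __init__(self, in_dim, hidden_dim, task_out_dims):
        super().__init__()
        # Encoder parameters receive gradients from all tasks,
        # which is how knowledge is shared across them.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_dim, d) for d in task_out_dims]
        )

    def forward(self, x, task_id):
        return self.heads[task_id](self.encoder(x))

# Joint training step: summing the per-task losses lets one backward
# pass update the shared encoder with signal from every task.
model = SharedMultitaskNet(in_dim=32, hidden_dim=64, task_out_dims=[2, 5])
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

batches = [(torch.randn(8, 32), torch.randint(0, 2, (8,)), 0),
           (torch.randn(8, 32), torch.randint(0, 5, (8,)), 1)]
loss = sum(criterion(model(x, t), y) for x, y, t in batches)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```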
“…Even though significant progress has been achieved, recent successes in machine learning, especially in deep learning, usually rely on a large amount of labelled data to obtain a small generalization error. In practice, however, acquiring labelled data can be prohibitively expensive, e.g., when classifying multiple objects in an image (Long et al. 2017), when analyzing patient data in healthcare (Wang and Pineau 2015; Zhou et al. 2021b), or when modelling users' product preferences (Murugesan and Carbonell 2017). Data hunger has become a long-standing problem for deep learning.…”
Section: Introduction (mentioning)
confidence: 99%
“…It has been shown to reduce the amount of annotated data needed per task to reach a desired performance. The crucial idea behind MTL is to extract and leverage the knowledge shared across tasks to improve overall performance (Wang et al. 2019b), which can be achieved by task-invariant feature learning (Maurer, Pontil, and Romera-Paredes 2016; Luo, Tao, and Wen 2017) or task relation learning (Zhang and Yeung 2012; Bingel and Søgaard 2017; Zhou et al. 2021b). One major issue with most existing feature learning approaches is that they only align the marginal distributions P(x) to extract shared features, without taking advantage of the tasks' label information.…”
Section: Introduction (mentioning)
confidence: 99%
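The marginal-alignment limitation raised in this excerpt is easiest to see in a DANN-style setup, where a gradient-reversal layer drives the encoder toward features that a task discriminator cannot tell apart. The sketch below is illustrative only, not the method of the paper or of any cited work: it aligns P(x) across tasks without ever consulting the task-specific labels, which is exactly the shortcoming the excerpt points out. Names and dimensions are assumptions.

```python
# Sketch of marginal feature alignment via a gradient-reversal layer
# (DANN-style; illustrative, not the paper's proposed method).
# A discriminator predicts which task a feature came from; reversing
# its gradient pushes the encoder toward task-invariant features,
# i.e., it aligns only the marginals P(x) and ignores labels y.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Flip the sign of the gradient flowing back into the encoder.
        return -ctx.lambd * grad_output, None

encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
discriminator = nn.Linear(64, 2)   # predicts the task id of a feature
criterion = nn.CrossEntropyLoss()

x_task0, x_task1 = torch.randn(8, 32), torch.randn(8, 32)
feats = encoder(torch.cat([x_task0, x_task1]))
task_ids = torch.cat([torch.zeros(8), torch.ones(8)]).long()

# The discriminator minimizes this loss; through the reversed gradient
# the encoder maximizes it, making the tasks' features indistinguishable.
adv_loss = criterion(discriminator(GradReverse.apply(feats, 1.0)), task_ids)
adv_loss.backward()
```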