Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2017
DOI: 10.18653/v1/p17-1001

Adversarial Multi-task Learning for Text Classification

Abstract: Neural network models have shown promise for multi-task learning, which focuses on learning shared layers to extract common, task-invariant features. However, in most existing approaches, the extracted shared features are prone to contamination by task-specific features or by noise brought in by other tasks. In this paper, we propose an adversarial multi-task learning framework that prevents the shared and private latent feature spaces from interfering with each other. We conduct …
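To make the shared-private adversarial idea in the abstract concrete, here is a minimal PyTorch sketch. It is an illustration under assumptions, not the authors' released code: the names (SharedPrivateModel, grad_reverse) are mine, single-layer LSTMs stand in for the paper's recurrent encoders, and the adversarial signal is realized with a gradient-reversal layer so that a task discriminator trained on the shared features pushes those features toward task-invariance.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates (and scales) gradients on backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

class SharedPrivateModel(nn.Module):
    """One shared encoder for all tasks plus one private encoder per task."""
    def __init__(self, input_dim, hidden_dim, num_tasks, num_classes):
        super().__init__()
        self.shared = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.private = nn.ModuleList(
            nn.LSTM(input_dim, hidden_dim, batch_first=True)
            for _ in range(num_tasks)
        )
        # Each task head classifies from concatenated [shared; private] features.
        self.heads = nn.ModuleList(
            nn.Linear(2 * hidden_dim, num_classes) for _ in range(num_tasks)
        )
        # The discriminator tries to identify the task from shared features alone.
        self.discriminator = nn.Linear(hidden_dim, num_tasks)

    def forward(self, x, task_id):
        _, (h_shared, _) = self.shared(x)             # h: (1, batch, hidden)
        _, (h_private, _) = self.private[task_id](x)
        h_shared, h_private = h_shared.squeeze(0), h_private.squeeze(0)
        class_logits = self.heads[task_id](
            torch.cat([h_shared, h_private], dim=-1)
        )
        # Gradient reversal: the discriminator learns to guess the task,
        # while the shared encoder learns to make that guess impossible.
        task_logits = self.discriminator(grad_reverse(h_shared))
        return class_logits, task_logits
```

Training would then minimize the per-task classification loss on class_logits plus a cross-entropy on task_logits against the true task id; because of the reversal layer, that second term acts adversarially on the shared encoder.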

Cited by 517 publications (478 citation statements)
References 21 publications
“…However, we would like to extend the corpus to domains other than sales in order to (i) validate the relevance of our model for other types of positions and (ii) determine which competencies are or are not common across jobs. In that sense, the use of multi-domain models (Liu, Qiu, and Huang 2017) could be of great help. Our model currently considers two labels ("hirable" and "not hirable").…”
Section: Discussion (citation type: mentioning)
confidence: 99%
“…Multi-task learning for neural networks in general (Caruana, 1997) and within NLP specifically (Collobert and Weston, 2008; Luong et al., 2016) has been widely studied. Much of the recent work for NLP has centered on neural architecture design: e.g., ensuring only beneficial information is shared across tasks (Liu et al., 2017) or arranging tasks in linguistically-motivated hierarchies (Søgaard and Goldberg, 2016; Hashimoto et al., 2017; Sanh et al., 2019). These contributions are orthogonal to ours because we instead focus on the multi-task training algorithm.…”
Section: Related Work (citation type: mentioning)
confidence: 99%
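As background for the "architecture design" thread in this statement: the common baseline these variants build on is hard parameter sharing, one encoder shared by all tasks feeding per-task output heads. A minimal PyTorch sketch, with illustrative names and a feed-forward encoder standing in for whatever encoder a given paper uses:

```python
import torch.nn as nn

class HardSharingModel(nn.Module):
    """Baseline hard parameter sharing: one shared encoder, one head per task."""
    def __init__(self, input_dim, hidden_dim, task_num_classes):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        # task_num_classes: a list with one class count per task.
        self.heads = nn.ModuleList(
            nn.Linear(hidden_dim, n) for n in task_num_classes
        )

    def forward(self, x, task_id):
        return self.heads[task_id](self.encoder(x))
```

Everything every task learns flows through the one encoder; the adversarial and hierarchical designs cited above are responses to exactly that lack of control over what gets shared.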
“…Our method concentrates on aspect-term-level sentiment domain adaptation by separating out domain-specific aspect features. Bousmalis et al. (2016) and Liu et al. (2017) separate features into two subspaces by introducing constraints on the learned features. The difference is that our method is more fine-grained and utilizes explicit aspect knowledge.…”
Section: Related Work (citation type: mentioning)
confidence: 99%
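The "constraints on the learned features" referenced here are, in both Bousmalis et al. (2016) and Liu et al. (2017), orthogonality penalties that discourage the shared and private subspaces from encoding the same information. A sketch of that term in PyTorch, assuming row-wise feature matrices (the function name is mine):

```python
import torch

def orthogonality_penalty(shared: torch.Tensor,
                          private: torch.Tensor) -> torch.Tensor:
    """Squared Frobenius norm of S^T P for (batch, dim) feature matrices.

    The penalty is zero when every shared feature direction is orthogonal
    to every private one, so minimizing it pushes the two subspaces apart.
    """
    return torch.norm(shared.t() @ private, p="fro") ** 2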