Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining 2019
DOI: 10.1145/3289600.3290978
Learning to Selectively Transfer

Abstract: Deep text matching approaches have been widely studied for many applications including question answering and information retrieval systems. To deal with a domain that has insufficient labeled data, these approaches can be used in a Transfer Learning (TL) setting to leverage labeled data from a resource-rich source domain. To achieve better performance, source domain data selection is essential in this process to prevent the "negative transfer" problem. However, the emerging deep transfer models do not fit well…
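A minimal sketch of the idea summarized in the abstract: a REINFORCE-style data selector decides which source-domain instances to keep when training a text matching model, and is rewarded by the change in target-domain validation accuracy. The network sizes, synthetic tensors, reward definition, and moving-average baseline below are illustrative assumptions, not the paper's exact architecture.

# Hedged sketch (assumed details): REINFORCE-based source-data selection
# for transfer learning in text matching. Dimensions, synthetic data, and
# the reward shape are illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "matcher": scores a fused (query, document) feature vector.
matcher = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
# Data selector (policy): keep-probability for each source instance.
selector = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))

opt_m = torch.optim.Adam(matcher.parameters(), lr=1e-3)
opt_s = torch.optim.Adam(selector.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss(reduction="none")

# Synthetic source batch and a small labeled target-domain validation set.
src_x, src_y = torch.randn(64, 16), torch.randint(0, 2, (64, 1)).float()
val_x, val_y = torch.randn(32, 16), torch.randint(0, 2, (32, 1)).float()

def val_accuracy():
    with torch.no_grad():
        return ((matcher(val_x) > 0).float() == val_y).float().mean().item()

baseline = val_accuracy()
for step in range(50):
    # 1) The selector samples which source instances to keep.
    keep_prob = torch.sigmoid(selector(src_x))      # shape (64, 1)
    keep = torch.bernoulli(keep_prob).detach()      # 0/1 selection mask

    # 2) Train the matcher only on the selected source instances.
    loss = (bce(matcher(src_x), src_y) * keep).sum() / keep.sum().clamp(min=1)
    opt_m.zero_grad()
    loss.backward()
    opt_m.step()

    # 3) Reward: improvement in target-domain validation accuracy, used as a
    #    proxy for "avoid negative transfer" (assumed reward shape).
    acc = val_accuracy()
    reward = acc - baseline
    baseline = 0.9 * baseline + 0.1 * acc           # moving-average baseline

    # 4) REINFORCE: reinforce keep/drop decisions that preceded a gain.
    log_prob = keep * torch.log(keep_prob + 1e-8) \
             + (1 - keep) * torch.log(1 - keep_prob + 1e-8)
    policy_loss = -(reward * log_prob).mean()
    opt_s.zero_grad()
    policy_loss.backward()
    opt_s.step()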

Cited by 26 publications (2 citation statements)
References 32 publications
“…Reinforcement learning (RL) methods are often applied in conversational models for response generation [63], based on the feedback from human quality assessment scores for the output utterances. Other NLP tasks where reinforcement learning can be applied include question answering [92,127], text classification [184] and entity linking [35].…”
Section: Keyword Extraction From Conversational Text
confidence: 99%
“…Existing studies on data selection and robust learning demonstrate a need for test domain knowledge during training. Some data selection work (Moore and Lewis, 2010; Kirchhoff and Bilmes, 2014; van der Wees et al., 2017; Fan et al., 2017; Qu et al., 2019; Kang et al., 2020) chooses critical in-domain data for domain adaptation, and other work defends against adversarial attacks but offers little help for out-of-domain robustness (Taori et al., 2020) under natural distributional shifts (Wang et al., 2021) that occur more frequently than extreme adversarial cases. This out-of-domain robustness is often measured by testing on a specific domain and a single task like sentiment classification (Müller et al., 2019; Hendrycks et al., 2020).…”
Section: Introduction
confidence: 99%
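Among the data selection methods cited in the statement above, the cross-entropy-difference criterion of Moore and Lewis (2010) can be sketched concisely. The toy corpora, whitespace tokenization, and add-one-smoothed unigram models below are simplifying assumptions, not the cited paper's setup.

# Hedged sketch of cross-entropy-difference data selection (Moore & Lewis, 2010)
# using toy unigram language models; the corpora here are placeholders.
from collections import Counter
import math

def unigram_logprob(corpus):
    counts = Counter(w for sent in corpus for w in sent.split())
    total = sum(counts.values())
    vocab = len(counts) + 1
    # Add-one smoothing so unseen words get a finite log-probability.
    return lambda w: math.log((counts[w] + 1) / (total + vocab))

in_domain = ["how do i reset my password", "reset password link not working"]
general = ["the weather is nice today", "stock prices fell sharply", "reset the router"]
candidates = ["password reset email never arrived", "stock prices rose again"]

lp_in, lp_gen = unigram_logprob(in_domain), unigram_logprob(general)

def ce_diff(sentence):
    words = sentence.split()
    # Per-word cross-entropy under each model; a lower difference means
    # the sentence looks more like the in-domain data.
    h_in = -sum(lp_in(w) for w in words) / len(words)
    h_gen = -sum(lp_gen(w) for w in words) / len(words)
    return h_in - h_gen

# Rank candidate source sentences; the lowest scores would be kept for adaptation.
for s in sorted(candidates, key=ce_diff):
    print(f"{ce_diff(s):6.3f}  {s}")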