Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing 2022
DOI: 10.18653/v1/2022.deeplo-1.12

Task Transfer and Domain Adaptation for Zero-Shot Question Answering

Abstract: Pretrained language models have shown success in various areas of natural language processing, including reading comprehension tasks. However, when applying machine learning methods to new domains, labeled data may not always be available. To address this, we use supervised pretraining on source-domain data to reduce sample complexity on domain-specific downstream tasks. We evaluate zero-shot performance on domain-specific reading comprehension tasks by combining task transfer with domain adaptation to fine-tune…
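To make the abstract's two ingredients concrete, below is a minimal sketch of what "domain adaptation plus task transfer for zero-shot QA" typically looks like in a Hugging Face-style stack. This is not the authors' released code; the checkpoint name, corpus, and hyperparameters are illustrative assumptions.

```python
import torch
from transformers import (
    AutoModelForMaskedLM,
    AutoModelForQuestionAnswering,
    AutoTokenizer,
)

base = "roberta-base"  # assumption: the paper does not mandate this checkpoint
tok = AutoTokenizer.from_pretrained(base)

# --- Stage 1: domain adaptation ---
# Continue masked-language-model pretraining on *unlabeled* target-domain text,
# so no in-domain QA annotations are ever required.
mlm = AutoModelForMaskedLM.from_pretrained(base)
opt = torch.optim.AdamW(mlm.parameters(), lr=5e-5)
domain_corpus = ["Unlabeled target-domain sentence ..."]  # placeholder corpus
for text in domain_corpus:
    enc = tok(text, return_tensors="pt", truncation=True)
    labels = enc["input_ids"].clone()
    # BERT-style masking, simplified: mask ~15% of tokens and compute the loss
    # only at masked positions (real code would also skip special tokens).
    mask = torch.rand(labels.shape) < 0.15
    if not mask.any():
        continue
    enc["input_ids"][mask] = tok.mask_token_id
    labels[~mask] = -100  # ignore unmasked positions in the loss
    loss = mlm(**enc, labels=labels).loss
    loss.backward()
    opt.step()
    opt.zero_grad()
mlm.save_pretrained("adapted-lm")
tok.save_pretrained("adapted-lm")

# --- Stage 2: task transfer ---
# Fine-tune the domain-adapted encoder on a *labeled source-domain* QA dataset
# (e.g. SQuAD-style (question, context, answer-span) triples); only the freshly
# initialized span-prediction head is new here.
qa = AutoModelForQuestionAnswering.from_pretrained("adapted-lm")
# ...standard extractive-QA fine-tuning loop over the source-domain dataset...
# The result is then evaluated directly on target-domain QA: zero-shot, because
# no in-domain task labels were used at any point.
```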

Cited by 4 publications (2 citation statements). References 10 publications.
“…The definition of 'zero-shot' in this paper follows recent studies (Pan et al., 2022; Zhao et al., 2022), and is similar to unsupervised domain adaptation, as discussed in §2. Another similar usage of 'zero-shot' is found in cross-lingual setups where no task labels are accessible in the target test language but labels in the same task are available in a source language.…”
“…https://beta.openai.com/examples/default-tldr-summary [10] These are still zero-shot baselines as they do not use in-domain task examples. [11] This baseline category is similar to contemporaneous work (Pan et al., 2022) where domain-task transfer is achieved through sequential in-domain off-task training followed by general-domain in-task training. Here we do not use in-domain task data of any kind.…”
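As a usage note, the zero-shot evaluation both citing papers refer to amounts to running the transferred model on in-domain questions directly, with no in-domain labels ever seen in training. A minimal sketch with the generic Hugging Face question-answering pipeline, where the checkpoint name is a hypothetical placeholder for a model produced by the two-stage recipe above:

```python
from transformers import pipeline

# "adapted-lm-after-squad" is a placeholder path for a checkpoint trained as
# sketched earlier (domain-adapted MLM, then source-domain QA fine-tuning).
qa = pipeline("question-answering", model="adapted-lm-after-squad")

# A domain-specific passage and question; the model never saw in-domain QA labels.
out = qa(
    question="What does hexokinase catalyze?",
    context="The enzyme hexokinase catalyzes the phosphorylation of glucose.",
)
print(out["answer"], out["score"])
```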