Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/2021.emnlp-main.746
Bridge to Target Domain by Prototypical Contrastive Learning and Label Confusion: Re-explore Zero-Shot Learning for Slot Filling

Abstract: Zero-shot cross-domain slot filling alleviates the dependence on labeled data when data are scarce in the target domain, and has therefore attracted extensive research. However, most existing methods do not achieve effective knowledge transfer to the target domain: they merely fit the distribution of seen slots and perform poorly on unseen slots in the target domain. To address this, we propose a novel approach based on prototypical contrastive learning with a dynamic label confusion strategy for zero-shot s…
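
The abstract names the core objective only at a high level. As a rough illustration, a prototypical contrastive loss of this kind can be written as an InfoNCE objective over per-slot prototype vectors. This is a minimal sketch, not the paper's exact formulation: the function name, the temperature value, and the assumption that prototypes are one vector per slot label are all illustrative.

```python
import torch
import torch.nn.functional as F

def prototypical_contrastive_loss(token_reprs, slot_ids, prototypes, temperature=0.1):
    """InfoNCE over slot prototypes: pull each token representation toward
    the prototype of its gold slot and away from all other slot prototypes.

    token_reprs: (N, d) encoder outputs for the labeled tokens
    slot_ids:    (N,)   gold slot index per token
    prototypes:  (S, d) one vector per slot label (illustrative assumption)
    """
    token_reprs = F.normalize(token_reprs, dim=-1)
    prototypes = F.normalize(prototypes, dim=-1)
    logits = token_reprs @ prototypes.T / temperature  # (N, S) cosine similarities
    return F.cross_entropy(logits, slot_ids)
```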

Cited by 14 publications (9 citation statements) · References 22 publications
“…Prototypical contrastive learning and label confusion (PCLC) [18]: PCLC is a pipeline approach that uses contrastive learning and a label confusion algorithm to improve the predictions of unseen slots. PCLC achieved state-of-the-art performance on the Snips dataset by using modified slot name descriptions.…”
Section: Baseline Methods (mentioning, confidence: 99%)
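
The "label confusion" component is only named in this excerpt. One plausible reading, an assumption rather than the paper's stated algorithm, is to soften each one-hot slot label over semantically similar slots, so that unseen labels close to the gold slot also receive probability mass. A hypothetical sketch under that assumption, with `alpha` and `temperature` as illustrative parameters:

```python
import torch
import torch.nn.functional as F

def confused_label_distribution(gold_ids, prototypes, alpha=0.5, temperature=0.1):
    """Hypothetical label-confusion target: keep `alpha` of the mass on the
    gold slot and spread the rest over the other slots in proportion to
    prototype similarity (assumed interpretation, not the paper's exact rule).
    """
    protos = F.normalize(prototypes, dim=-1)
    sim = protos @ protos.T / temperature        # (S, S) label-label similarity
    sim.fill_diagonal_(float('-inf'))            # exclude the gold slot itself
    soft = F.softmax(sim, dim=-1)[gold_ids]      # (N, S) mass over the other slots
    one_hot = F.one_hot(gold_ids, prototypes.size(0)).float()
    return alpha * one_hot + (1 - alpha) * soft

# A model would then be trained against this soft target, e.g. with
# F.kl_div(model_log_probs, target, reduction='batchmean').
```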
“…Pipeline methods [17], [18] were proposed to learn context-based representations that improve the prediction of complete slot entities. A context-based representation captures the general characteristics of slot entities from the surrounding contextual information.…”
Section: Multi-relation-based Representation (mentioning, confidence: 99%)
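
A "context-based representation" is not defined further in this excerpt; one simple reading, assumed here for illustration, is to pool the encoder states around a candidate span so the representation reflects the surrounding context rather than the entity's surface form:

```python
import torch

def context_representation(hidden_states, span_mask):
    """Mean-pool encoder states *outside* the candidate entity span
    (one assumed reading of a 'context-based representation').

    hidden_states: (T, d) per-token encoder outputs for one utterance
    span_mask:     (T,)   bool, True on tokens inside the candidate span
    """
    context = hidden_states[~span_mask]  # tokens surrounding the span
    return context.mean(dim=0)           # (d,) context vector
```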
“…To separate the text features of different domains, we apply supervised contrastive learning [11], [12] to train the domain feature extractor. We treat samples from the same domain as positives and samples from other domains as negatives, so that text features of the same domain are pulled together and pushed away from those of other domains.…”
Section: Domain Supervised Contrastive Learning (mentioning, confidence: 99%)
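
The loss described in this excerpt matches standard supervised contrastive learning (Khosla et al., 2020) with domain indices as labels. A minimal sketch, assuming utterance-level features from the domain feature extractor; the function name and temperature are illustrative:

```python
import torch
import torch.nn.functional as F

def domain_supcon_loss(features, domain_ids, temperature=0.07):
    """Supervised contrastive loss with domain labels: same-domain samples
    are positives, all other samples in the batch are negatives.

    features:   (N, d) outputs of the domain feature extractor
    domain_ids: (N,)   domain index of each utterance
    """
    z = F.normalize(features, dim=-1)
    sim = z @ z.T / temperature                                  # (N, N)
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    pos_mask = ((domain_ids[:, None] == domain_ids[None, :]) & ~self_mask).float()
    sim = sim.masked_fill(self_mask, -1e9)                       # drop self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)   # row-wise log-softmax
    # mean log-likelihood of positives per anchor, averaged over anchors
    loss = -(log_prob * pos_mask).sum(1) / pos_mask.sum(1).clamp(min=1)
    return loss.mean()
```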