2021
DOI: 10.48550/arxiv.2110.03572
Preprint

Bridge to Target Domain by Prototypical Contrastive Learning and Label Confusion: Re-explore Zero-Shot Learning for Slot Filling

Abstract: Zero-shot cross-domain slot filling alleviates the data dependence in the case of data scarcity in the target domain, and has therefore attracted extensive research. However, most existing methods do not achieve effective knowledge transfer to the target domain: they merely fit the distribution of seen slots and perform poorly on unseen slots in the target domain. To solve this, we propose a novel approach based on prototypical contrastive learning with a dynamic label confusion strategy for zero-shot s…
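The approach named in the abstract centers on prototypical contrastive learning. As a rough, hypothetical sketch only (not the authors' implementation; the function name, tensor shapes, and temperature value are assumptions, and the dynamic label confusion strategy is not shown), such a loss pulls each token representation toward the prototype of its gold slot label and away from the other label prototypes:

```python
import torch
import torch.nn.functional as F

def prototypical_contrastive_loss(token_repr, labels, prototypes, temperature=0.1):
    """Illustrative sketch of a prototypical contrastive loss.

    token_repr: (N, d) token representations from an encoder.
    labels:     (N,)  gold slot-label indices for each token.
    prototypes: (C, d) one prototype vector per slot label.
    """
    # Cosine similarity between each token and every label prototype.
    token_repr = F.normalize(token_repr, dim=-1)
    prototypes = F.normalize(prototypes, dim=-1)
    logits = token_repr @ prototypes.t() / temperature  # (N, C)
    # InfoNCE-style objective: the gold prototype is the positive,
    # all other prototypes act as negatives.
    return F.cross_entropy(logits, labels)
```

In a zero-shot setting, the label prototypes would typically be derived from slot descriptions or averaged token embeddings, so unseen target-domain slots can be scored the same way as seen ones.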

Cited by 2 publications (2 citation statements; citing years 2022 and 2023)
References 20 publications
“…We plan to cover the following: Contrastive Data Augmentation for NLP (Shen et al., 2020); Text Classification (Fang et al., 2020; Kachuee et al., 2020; Suresh and Ong, 2021; Du et al., 2021; Carlsson et al., 2021; Qiu et al., 2021; Klein and Nabi, 2021); Sentence Embeddings (Sedghamiz et al., 2021) including Quick-Thought (Logeswaran and Lee, 2018), Sentence-BERT (Reimers and Gurevych, 2019), Info-Sentence BERT, SimCSE (Gao et al., 2021b), DeCLUTR (Giorgi et al., 2020), ConSERT (Yan et al., 2021b), DialogueCSE (Liu et al., 2021a). We will also cover discourse analysis (Iter et al., 2020; Kiyomaru and Kurohashi, 2021); Information Extraction (Qin et al., 2020); Machine Translation (Vamvas and Sennrich, 2021); Question Answering (Karpukhin et al., 2020; You et al., 2021); Summarization (Duan et al., 2019) including faithfulness (Cao and Wang, 2021), summary evaluation (Wu et al., 2020a), multilingual summarization, and dialogue summarization; Text Generation (Chai et al., 2021) including logic-consistent text generation (Shu et al., 2021), paraphrase generation, grammatical error correction, dialogue generation (Cai et al., 2020), x-ray report generation (Yan et al., 2021a), data-to-text generation (Uehara et al., 2020); Few-shot Learning (Wang et al., 2021c; Luo et al., 2021; Das et al., 2021); Language Model Contrastive Pretraining (Wu…”
Section: Presenters | Citation type: mentioning | Confidence: 99%
“…Recent works (Liu et al., 2020; He et al., 2020; Wang et al., 2021; Du et al., 2021; Yu et al., 2021) take different formulations to model the zero-shot slot-filling problem. For example, the works in (Liu et al., 2020; He et al., 2020; Wang et al., 2021) treat zero-shot slot filling as a two-stage sequence labeling task. In the first stage, they perform BIO classification to identify potential slot spans.…”
Section: Introduction | Citation type: mentioning | Confidence: 99%
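For the two-stage formulation described in the citation above, the first stage reduces to collecting candidate spans from token-level B/I/O predictions. A minimal sketch, assuming plain B/I/O tags without slot-type suffixes (the helper below is hypothetical, not taken from any of the cited works):

```python
def bio_to_spans(tags):
    """Collect (start, end) spans from a BIO tag sequence.

    Sketch of the first stage: a token-level B/I/O classifier marks
    potential slot spans, which a second stage would then assign to
    (possibly unseen) slot types.
    """
    spans, start = [], None
    for i, tag in enumerate(tags):
        if tag == "B":
            if start is not None:
                spans.append((start, i))
            start = i
        elif tag == "O" and start is not None:
            spans.append((start, i))
            start = None
        # tag == "I" continues the current span
    if start is not None:
        spans.append((start, len(tags)))
    return spans

# e.g. bio_to_spans(["O", "B", "I", "O", "B"]) -> [(1, 3), (4, 5)]
```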