Findings of the Association for Computational Linguistics: EMNLP 2020
DOI: 10.18653/v1/2020.findings-emnlp.400

Improving Limited Labeled Dialogue State Tracking with Self-Supervision

Abstract: Existing dialogue state tracking (DST) models require plenty of labeled data. However, collecting high-quality labels is costly, especially when the number of domains increases. In this paper, we address a practical DST problem that is rarely discussed, i.e., learning efficiently with limited labeled data. We present and investigate two self-supervised objectives: preserving latent consistency and modeling conversational behavior. We encourage a DST model to have consistent latent distributions given a perturb…
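To give a rough feel for the "preserving latent consistency" objective named in the abstract, the sketch below encodes a dialogue twice, once as-is and once under a simple word-dropout perturbation, and penalizes divergence between the two predicted distributions so that unlabeled dialogues still contribute a training signal. The toy encoder, the word-dropout perturbation, and the symmetric-KL loss are assumptions chosen for illustration; they are not the paper's exact model or objectives.

```python
# Minimal sketch of a latent-consistency objective for limited-label DST.
# The encoder, perturbation, and divergence choice here are illustrative
# assumptions, not the paper's exact setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyDialogueEncoder(nn.Module):
    """Encodes a token-id sequence into a distribution over candidate slot values."""
    def __init__(self, vocab_size=1000, hidden=128, num_values=20):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden, padding_idx=0)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_values)

    def forward(self, token_ids):
        emb = self.embed(token_ids)
        _, h = self.gru(emb)             # final hidden state summarizes the dialogue
        return self.head(h.squeeze(0))   # logits over candidate slot values

def word_dropout(token_ids, p=0.1, unk_id=1):
    """Perturb the input by randomly replacing non-padding tokens with an UNK id."""
    mask = (torch.rand_like(token_ids, dtype=torch.float) < p) & (token_ids != 0)
    return token_ids.masked_fill(mask, unk_id)

def consistency_loss(logits_a, logits_b):
    """Symmetric KL between the two predicted distributions (one hedged reading
    of 'consistent latent distributions'; other divergences would also work)."""
    pa, pb = F.log_softmax(logits_a, -1), F.log_softmax(logits_b, -1)
    return 0.5 * (F.kl_div(pa, pb.exp(), reduction="batchmean")
                  + F.kl_div(pb, pa.exp(), reduction="batchmean"))

if __name__ == "__main__":
    model = ToyDialogueEncoder()
    tokens = torch.randint(2, 1000, (4, 30))   # a toy batch of tokenized dialogues
    loss = consistency_loss(model(tokens), model(word_dropout(tokens)))
    loss.backward()                             # unlabeled data contributes a gradient
    print(f"self-supervised consistency loss: {loss.item():.4f}")
```

In practice such a term would be added to the supervised DST loss on the small labeled set, so that the large pool of unlabeled dialogues regularizes the model.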

Cited by 9 publications (9 citation statements) · References 33 publications
“…Liu et al., 2019 collect their dataset from the logs in the DiDi customer service center. It is restricted to a task-oriented scenario, where one speaker is the user and the other is the customer agent, with limited topics, and it is also connected to the goal of the dialogue state tracking task (Wu et al., 2019a, 2020b). Recently, Gliwa et al., 2019 introduce the SAMSum corpus, with 16k chat dialogues with manually annotated summaries.…”
Section: Related Work (mentioning)
confidence: 99%
“…Few-Shot DST is a promising direction for reducing the need for human annotation while achieving quasi-SOTA performance with a fraction of the training data. Different techniques have been proposed (Wu et al., 2019; Mi et al., 2021; Li et al., 2021b; Gao et al., 2020; Lin et al., 2021b,a; Campagna et al., 2020; Wu et al., 2020b; Su et al., 2021; Peng et al., 2020). We briefly describe and compare DS2 with existing few-shot models in Section 4.5.…”
Section: Related Work (mentioning)
confidence: 99%
“…Self-supervised Learning for NLP. Recent studies have verified the effectiveness of self-supervised learning (Raina et al., 2007) for different NLP tasks by designing proper pretext tasks (Wang et al., 2019; Wu et al., 2019; Banerjee and Baral, 2020; Wu et al., 2020; Rücklé et al., 2020; Shi et al., 2020; Yamada et al., 2020; Xu et al., 2020; Guu et al., 2020). For extractive summarization, Wang et al. (2019) design pretext tasks including masking, replacing, and switching sentences in passages to learn contextualized representations.…”
Section: Related Work (mentioning)
confidence: 99%
“…For extractive summarization, Wang et al. (2019) design pretext tasks including masking, replacing, and switching sentences in passages to learn contextualized representations. On dialogue generation, through the inconsistent order detection task (Wu et al., 2019) or utterance prediction and restoration tasks (Wu et al., 2020; Xu et al., 2020), models can learn to capture relationships between utterances in dialogue flows. On question answering, Banerjee and Baral (2020) propose the Knowledge Triplet Learning task to learn multiple-choice QA, and Rücklé et al. (2020) use self-supervision for unsupervised transfer of answer matching ability among domains.…”
Section: Related Work (mentioning)
confidence: 99%
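To make the order-detection pretext task mentioned in the excerpt above more concrete, here is a hedged sketch of how a self-supervised training pair could be built from an unlabeled dialogue: some conversations keep their original utterance order (label 0) and others have two utterances swapped (label 1), giving a binary signal that needs no human annotation. The swap-two-utterances scheme and 50/50 corruption rate are illustrative assumptions and may differ from the cited papers' exact construction.

```python
# Hedged sketch: building "inconsistent order detection" pairs from unlabeled
# dialogues. The corruption scheme here is illustrative, not the cited recipe.
import random

def make_order_detection_example(dialogue, p_shuffle=0.5):
    """Return (utterances, label): label 1 if the utterance order was corrupted."""
    utterances = list(dialogue)
    if len(utterances) > 1 and random.random() < p_shuffle:
        i, j = random.sample(range(len(utterances)), 2)   # pick two positions to swap
        utterances[i], utterances[j] = utterances[j], utterances[i]
        return utterances, 1    # inconsistent order
    return utterances, 0        # original order preserved

if __name__ == "__main__":
    dialogue = [
        "Hi, I need a train to Cambridge on Friday.",
        "Sure, what time would you like to leave?",
        "Around 9 am, please.",
        "TR1234 departs at 9:11 and arrives at 10:08.",
    ]
    utts, label = make_order_detection_example(dialogue)
    print(label, utts)
```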