2020
DOI: 10.48550/arxiv.2009.10435
Preprint

CREDIT: Coarse-to-Fine Sequence Generation for Dialogue State Tracking

Abstract: In dialogue systems, a dialogue state tracker aims to accurately find a compact representation of the current dialogue status, based on the entire dialogue history. While previous approaches often define dialogue states as a combination of separate triples (domain, slot, value), in this paper, we employ a structured state representation and cast dialogue state tracking as a sequence generation problem. Based on this new formulation, we propose a CoaRsE-to-fine DIalogue state Tracking (CREDIT) approach. Taking adv…
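
The abstract describes serializing a structured dialogue state and generating it coarse-to-fine. As a rough illustration (the `to_sequences` helper and the serialization format below are hypothetical, not the paper's exact scheme), a state can be flattened into a coarse sketch of domains and slots plus a fine sequence that also carries the values:

```python
# Hypothetical serialization for coarse-to-fine state generation (illustrative only).

def to_sequences(state):
    """state: {domain: {slot: value}} -> (coarse_sketch, fine_sequence)."""
    coarse, fine = [], []
    for domain, slots in state.items():
        coarse.append(domain)
        fine.append(domain)
        for slot, value in slots.items():
            coarse.append(slot)                 # the sketch keeps domains and slots only
            fine.extend([slot, "=", value])     # the fine sequence also carries values
    return " ".join(coarse), " ".join(fine)

state = {"hotel": {"area": "centre", "stars": "4"}, "taxi": {"leave at": "18:30"}}
print(to_sequences(state))
# ('hotel area stars taxi leave at', 'hotel area = centre stars = 4 taxi leave at = 18:30')
```

A coarse decoder would be trained on the first sequence and a fine decoder on the second, conditioned on the sketch.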

Cited by 7 publications (5 citation statements)
References 24 publications
“…The similarity matrix is computed via either a fixed combination method or a K-means sharing method. • CREDIT-RL: CREDIT-RL [7] employs a structured representation to represent dialogue states and casts DST as a sequence generation problem. It also uses a reinforcement loss to fine-tune the model.…”
Section: Comparison Methods
Mentioning confidence: 99%
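
The reinforcement loss mentioned above is only named in this statement, not specified. As a hedged sketch, a generic self-critical REINFORCE objective (a common recipe for sequence-level fine-tuning, not necessarily CREDIT-RL's exact formulation or reward) could look like:

```python
import torch

def reinforce_loss(log_probs, sampled_reward, baseline_reward):
    """Generic REINFORCE-style sequence loss with a baseline (illustrative only).

    log_probs: (T,) log-probabilities of the tokens in a sampled state sequence.
    The rewards could be, e.g., joint-goal match of the sampled vs. greedily
    decoded state against the gold state (an assumption, not CREDIT-RL's spec).
    """
    advantage = sampled_reward - baseline_reward
    return -(advantage * log_probs.sum())

# Dummy decoder scores for a 3-token sampled sequence over a toy 50-word vocabulary.
logits = torch.randn(3, 50, requires_grad=True)
sampled_ids = torch.tensor([4, 17, 8])
log_probs = torch.log_softmax(logits, dim=-1)[torch.arange(3), sampled_ids]
loss = reinforce_loss(log_probs, sampled_reward=1.0, baseline_reward=0.4)
loss.backward()
```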
“…Motivated by the advances in reading comprehension [4], DST has been further formulated as a machine reading comprehension problem [13,14,30,31]. Other techniques such as pointer networks [56] and reinforcement learning [7,8,23] have also been applied to DST.…”
Section: Related Work
Mentioning confidence: 99%
“…The multi-stage pipeline has been studied in many other text generation tasks. Some coarse-to-fine frameworks generate the intermediate sketches or the coarse text to help the final generation, such as dialogue state tracking (Chen et al., 2020), neural story generation (Fan et al., 2018), and extractive summarization (Xu and Lapata, 2020). More specifically, multi-stage summarization produces the salient information step by step, such as the extract-and-summarize pipeline (Subramanian et al., 2019).…”
Section: Multi-stage Text Generation
Mentioning confidence: 99%
“…Sequicity (Lei et al., 2018) is a two-step sequence-to-sequence model which first encodes the dialogue history and generates a belief span, and then generates a language response from the belief span. COMER (Ren et al., 2019) and CREDIT (Chen et al., 2020b) are hierarchical sequence-to-sequence models which represent the relationships between the intents, slots, and values in a hierarchical way, and employ a multi-stage decoder. Our proposed approach also uses a sequence-to-sequence model.…”
Section: Dialogue State Tracking
Mentioning confidence: 99%
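
The two-step and hierarchical decoders described in this statement are only summarized at a high level. A minimal sketch of the general pattern, with placeholder functions rather than the actual models of Sequicity, COMER, or CREDIT, is:

```python
# Two-stage "generate a belief/state span, then continue" pipeline (illustrative only).
# encode(), state_decoder() and response_decoder() are hypothetical stand-ins.

def two_stage(history, encode, state_decoder, response_decoder):
    enc = encode(history)                           # encode the dialogue history
    belief_span = state_decoder(enc)                # stage 1: generate the state/belief span
    response = response_decoder(enc, belief_span)   # stage 2: condition on the span
    return belief_span, response

# Toy stand-ins so the sketch runs end to end.
encode = lambda h: h.lower()
state_decoder = lambda e: "hotel area=centre stars=4"
response_decoder = lambda e, b: f"Looking for a hotel matching: {b}"
print(two_stage("User: a 4-star hotel in the centre", encode, state_decoder, response_decoder))
```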
“…The methods regard DST as a classification and/or an extraction problem and independently infer the intent and slot value pairs for the current turn. (2) There are also some methods which view DST as a sequence-to-sequence problem (Lei et al., 2018; Ren et al., 2019; Chen et al., 2020b). These methods sequentially infer the intent and slot value pairs for the current turn on the basis of the dialogue history and usually employ a hierarchical structure for the inference (decoding).…”
Section: Introduction
Mentioning confidence: 99%
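
The contrast drawn here, between per-slot classification/extraction and sequence-to-sequence state generation, can be made concrete with a toy sketch (modules, slot names, and dimensions below are illustrative assumptions, not taken from any cited model):

```python
import torch
import torch.nn as nn

hidden = torch.randn(1, 128)                    # pooled dialogue-history encoding (toy)

# (1) Classification-style DST: one head per slot over that slot's candidate values.
slot_heads = nn.ModuleDict({
    "hotel_area": nn.Linear(128, 7),            # 7 candidate areas (toy)
    "hotel_stars": nn.Linear(128, 6),           # 6 candidate star ratings (toy)
})
per_slot = {slot: head(hidden).argmax(-1).item() for slot, head in slot_heads.items()}

# (2) Generative DST: a single decoder emits the whole state as one token sequence,
#     e.g. "hotel area = centre ; hotel stars = 4" (one decoding step shown).
decoder = nn.GRU(input_size=128, hidden_size=128, batch_first=True)
vocab_proj = nn.Linear(128, 1000)               # toy output vocabulary
step_out, _ = decoder(hidden.unsqueeze(1))      # shape (1, 1, 128)
next_token_id = vocab_proj(step_out).argmax(-1)
```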