Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2020
DOI: 10.18653/v1/2020.acl-main.186

Recursive Template-based Frame Generation for Task Oriented Dialog

Abstract: The Natural Language Understanding (NLU) component in task oriented dialog systems processes a user's request and converts it into structured information that can be consumed by downstream components such as the Dialog State Tracker (DST). This information is typically represented as a semantic frame that captures the intent and slot-labels provided by the user. We first show that such a shallow representation is insufficient for complex dialog scenarios, because it does not capture the recursive nature inherent… [abstract truncated]
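To make the abstract's contrast concrete, the sketch below shows a flat intent/slot frame next to a recursive frame in which a slot value is itself a frame. The domain, utterance, slot names, and nesting are invented for illustration and are not taken from the paper.

```python
# Illustrative only: a flat frame vs. a recursive frame whose slot values
# may themselves be frames. Slot names and the example utterance are invented.
from dataclasses import dataclass, field
from typing import Dict, Union

# Flat representation: one intent plus string-valued slots.
flat_frame = {
    "intent": "book_flight",
    "slots": {"destination": "Boston", "date": "tomorrow"},
}

@dataclass
class Frame:
    """A recursive frame: a slot value is either a string or another Frame."""
    intent: str
    slots: Dict[str, Union[str, "Frame"]] = field(default_factory=dict)

# "Book a flight to Boston that leaves after my last meeting on Friday."
# The departure-time constraint is itself a structured sub-request, which the
# flat intent/slot representation above cannot express.
recursive_frame = Frame(
    intent="book_flight",
    slots={
        "destination": "Boston",
        "depart_after": Frame(
            intent="get_event_end_time",
            slots={"event": "last meeting", "date": "Friday"},
        ),
    },
)

def frame_depth(value) -> int:
    """Depth of nesting; 0 for a plain string slot value."""
    if isinstance(value, str):
        return 0
    return 1 + max((frame_depth(v) for v in value.slots.values()), default=0)

print(frame_depth(recursive_frame))  # 2
```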

Cited by 7 publications (7 citation statements) · References 14 publications

Citation statements (ordered by relevance):

“…Many dialogue state tracking tasks generate slots and slot values using a copy component (Wu et al., 2019a; Ouyang et al., 2020; Gangadharaiah and Narayanaswamy, 2020; Chen et al., 2020a; Li et al., 2020b). Among them, Wu et al. (2019a), Ouyang et al. (2020) and Chen et al. (2020a) solved the problem of multi-domain dialogue state tracking.…”
Section: CopyNet (mentioning)
confidence: 99%
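Several of the citing systems in the snippet above generate slot values with a copy component. As a rough illustration of that idea only, the following is a generic pointer/copy mixture sketch, not the architecture of the cited paper or of any specific citing model: the final output distribution blends a vocabulary distribution with attention mass copied onto the source tokens.

```python
# Generic pointer/copy mixture sketch (illustrative, not any cited model):
# the decoder mixes a generation distribution over the vocabulary with a
# copy distribution induced by attention over the source tokens.
import numpy as np

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def copy_mixture(gen_logits, attn_scores, source_ids, vocab_size, p_gen):
    """Blend generate and copy distributions.

    gen_logits  : (vocab_size,) scores over the output vocabulary
    attn_scores : (src_len,)    attention scores over source positions
    source_ids  : (src_len,)    vocabulary id of each source token
    p_gen       : scalar in [0, 1], probability of generating vs. copying
    """
    p_vocab = softmax(gen_logits)
    attn = softmax(attn_scores)

    # Scatter-add attention mass onto the vocabulary ids of the source tokens,
    # so repeated source tokens accumulate copy probability.
    p_copy = np.zeros(vocab_size)
    np.add.at(p_copy, source_ids, attn)

    return p_gen * p_vocab + (1.0 - p_gen) * p_copy

# Toy example: a 10-word vocabulary and 4 source tokens (id 7 appears twice).
rng = np.random.default_rng(0)
dist = copy_mixture(
    gen_logits=rng.normal(size=10),
    attn_scores=np.array([2.0, 0.5, 1.0, 2.0]),
    source_ids=np.array([7, 3, 5, 7]),
    vocab_size=10,
    p_gen=0.6,
)
assert np.isclose(dist.sum(), 1.0)
```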
“…explored the semantic tagging task in the medical domain to capture the symptom tags in clinical conversations. They use a sliding window on the historical context as a part of the input of a bi-LSTM model and attached a CRF layer on the output side to predict the tags of clinical conversations. Gangadharaiah and Narayanaswamy (2020) argued that the shallow output representations of traditional semantic tagging lacked the ability to represent the structured dialogue information.…”
(mentioning)
confidence: 99%
“…Recent work leverages the success of large pretrained language models to generate long texts such as stories (Rashkin et al., 2020), reviews (Cho et al., 2019a), and fake news (Zellers et al., 2019). Most end-user applications for assisting user writing, however, are confined to sentence-level generation (Kannan et al., 2016; Alikaniotis and Raheja, 2019; Prabhumoye et al., 2019; Faltings et al., 2021). Our work focuses on document-level writing assistance in which a document sketch is constructed from a set of similar documents.…”
Section: Document Generation (mentioning)
confidence: 99%
“…These risks have ensured that end-user applications involving text generation (e.g., Smart Compose, Smart Reply, Grammarly) still require a human to remain in control of content and are restricted to individual sentences or even smaller segments of text (Kannan et al., 2016; Alikaniotis and Raheja, 2019; Prabhumoye et al., 2019; Faltings et al., 2021).…”
[Figure 1 caption, spilled into this snippet: The right side shows a sketch for writing the report of a future democratic national convention, generated from a pile of previous reports.]
Section: Introduction (mentioning)
confidence: 99%
“…As a powerful recurrent model, LSTM showed promising tagging accuracy on the ATIS dataset owing to the memory control of its gate mechanism [127]. [128] argued that the shallow output representations of traditional slot filling lacked the ability to represent the structured dialogue information. To improve, they treated the Slot Filling task as a template-based tree decoding task by iteratively generating and filling in the templates.…”
Section: Slot Filling (mentioning)
confidence: 99%
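The template-based tree decoding described in the last snippet can be pictured as iteratively generating a template for an intent and filling its slot placeholders, recursing whenever a filled value is itself a template to expand. The toy sketch below uses invented templates and a stubbed "decoder"; only the control flow mirrors the generate-then-fill idea, and it is not the paper's model or training setup.

```python
# Toy sketch of recursive template filling (invented templates and a stubbed
# "decoder"; only the control flow mirrors the iterative generate-then-fill idea).

# A template is an intent plus named slot placeholders.
TEMPLATES = {
    "book_flight": ["destination", "depart_after"],
    "get_event_end_time": ["event", "date"],
}

def predict_slot_value(utterance: str, slot: str) -> str:
    """Stub for a learned decoder: returns either a text span from the
    utterance or the name of a sub-template to expand recursively."""
    stub = {
        "destination": "Boston",
        "depart_after": "get_event_end_time",   # triggers recursion
        "event": "last meeting",
        "date": "Friday",
    }
    return stub[slot]

def fill_template(utterance: str, intent: str, depth: int = 0, max_depth: int = 3) -> dict:
    """Generate a template for `intent`, then fill each slot; recurse when a
    predicted value is itself a template name."""
    frame = {"intent": intent, "slots": {}}
    if depth >= max_depth:
        return frame
    for slot in TEMPLATES[intent]:
        value = predict_slot_value(utterance, slot)
        if value in TEMPLATES:
            frame["slots"][slot] = fill_template(utterance, value, depth + 1, max_depth)
        else:
            frame["slots"][slot] = value
    return frame

print(fill_template("Book a flight to Boston after my last meeting on Friday.",
                    "book_flight"))
```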