Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021)
DOI: 10.18653/v1/2021.acl-long.287
NeuralWOZ: Learning to Collect Task-Oriented Dialogue via Model-Based Simulation

Abstract: We propose NeuralWOZ, a novel dialogue collection framework that uses model-based dialogue simulation. NeuralWOZ has two pipelined models, Collector and Labeler. Collector generates dialogues from (1) user's goal instructions, which are the user context and task constraints in natural language, and (2) system's API call results, which is a list of possible query responses for user requests from the given knowledge base. Labeler annotates the generated dialogue by formulating the annotation as a multiple-choice…
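The Collector-then-Labeler pipeline the abstract describes can be sketched as follows. This is a hypothetical, simplified illustration of the data flow only: the class names, method signatures, and stand-in logic are invented for this sketch, while the real Collector and Labeler are pretrained neural models.

```python
# Hypothetical sketch of the two-stage NeuralWOZ pipeline from the abstract.
# Trivial stand-ins replace the actual neural models; only the data flow
# (goal instruction + API results -> dialogue -> multiple-choice labels)
# mirrors the description.

class Collector:
    """Generates a dialogue from a goal instruction and API call results."""

    def generate(self, goal_instruction, api_results):
        # A real model decodes a full multi-turn dialogue; this stand-in
        # fabricates one user/system exchange per API result.
        dialogue = []
        for result in api_results:
            dialogue.append(("user", f"I need: {goal_instruction}"))
            dialogue.append(("system", f"How about {result['name']}?"))
        return dialogue


class Labeler:
    """Annotates a dialogue by multiple-choice selection over candidate
    labels extracted from the goal instruction and API results."""

    def annotate(self, dialogue, candidates):
        labels = []
        for speaker, utterance in dialogue:
            # Stand-in "choice": pick the first candidate the utterance
            # mentions; a real model scores all candidates per turn.
            chosen = next((c for c in candidates if c in utterance), None)
            labels.append((utterance, chosen))
        return labels


# Pipeline: Collector -> Labeler.
goal = "a cheap hotel in the north"
api_results = [{"name": "Alpha Hotel", "price": "cheap", "area": "north"}]
dialogue = Collector().generate(goal, api_results)

# Candidate labels come from the goal instruction and API results.
candidates = ["cheap", "north", "Alpha Hotel"]
annotated = Labeler().annotate(dialogue, candidates)
```

The point of the two-stage split is that dialogue generation and annotation are decoupled: the Labeler only has to choose among candidates that are already known from the goal instruction and knowledge base, rather than produce labels from scratch.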

Cited by 10 publications (12 citation statements). References 28 publications.
“…We compare against recent systems that can perform zero-shot DST: TRADE, MA-DST, SUMBT (Lee et al., 2019), and GPT2-DST. Our method is complementary to systems for creating synthetic in-domain dialogs (Kim et al., 2021).…”
Section: Methods
confidence: 99%
“…As discussed in Section 4, our method is orthogonal to and thus can be used simultaneously with techniques for creating synthetic in-domain training data for DST (Campagna et al., 2020; Kim et al., 2021). For slot queries, we use templated questions of the form:…”
Section: Dialog State Tracking Notes
confidence: 99%
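The QA-style slot querying this excerpt alludes to can be sketched as follows. The template wording and all function names here are invented for illustration; the citing paper's actual templates are truncated in the excerpt above and are not reproduced.

```python
# Hypothetical sketch of dialogue state tracking via templated slot
# questions, in the QA-for-DST style the excerpt describes. The template
# text is illustrative, not the cited work's actual wording.

def slot_question(domain, slot):
    # Illustrative template; real systems tune the phrasing per slot.
    return f"What is the {slot} of the {domain} the user wants?"


def query_state(dialogue_history, domain, slot, reader):
    """Ask a reading-comprehension model the templated question against the
    dialogue history; the extracted answer span is the slot value."""
    return reader(context=dialogue_history, question=slot_question(domain, slot))


def toy_reader(context, question):
    # Naive keyword "reader" standing in for a neural QA model.
    for value in ("cheap", "expensive", "north", "south"):
        if value in context:
            return value
    return None


history = "user: I want a cheap hotel in town."
value = query_state(history, "hotel", "price range", toy_reader)
```

Because the slot is expressed as a natural-language question rather than a fixed classifier head, the same reader can be pointed at slots it never saw during training, which is what enables the zero-shot transfer discussed above.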
“…Improvements in these simulators contribute to overall enhancements in dialogue system performance and their ability to handle diverse user inputs and scenarios. Lastly, we explore end-to-end approaches that aim to directly generate dialogue without explicitly defining intermediate steps or modules [4,19,28,32,58]. End-to-end models offer the advantage of encapsulating the entire dialogue generation process within a single model, simplifying both training and inference procedures.…”
Section: Content
confidence: 99%
“…They usually define task schemas, rules, and templates to simulate certain scenarios in task-oriented dialogue (TOD). Kim et al. (2021b) proposed a neural simulation approach using pre-trained LMs for fast domain adaptation in TOD. However, it requires training data from a source domain to transfer to an unseen target domain.…”
Section: Synthetic Dialogue Generation
confidence: 99%