Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Demonstrations, 2021
DOI: 10.18653/v1/2021.naacl-demos.15
Alexa Conversations: An Extensible Data-driven Approach for Building Task-oriented Dialogue Systems

Abstract: Traditional goal-oriented dialogue systems rely on various components such as natural language understanding, dialogue state tracking, policy learning and response generation. Training each component requires annotations which are hard to obtain for every new domain, limiting scalability of such systems. Similarly, rule-based dialogue systems require extensive writing and maintenance of rules and do not scale either. End-to-End dialogue systems, on the other hand, do not require module-specific annotations but…

Cited by 5 publications (8 citation statements) · References 13 publications
“…Engaging User Simulators and Assistant systems in synthetically generated dialogues leads to a natural reward function, and many works have used simulators in order to optimize a Reinforcement Learning policy (Schatzmann et al, 2007;Fazel-Zarandi et al, 2017;Peng et al, 2017;Su et al, 2018;Gür et al, 2018;Kreyssig et al, 2018). Such approaches are particularly used for optimizing the policy component of pipeline-based systems (Fazel-Zarandi et al, 2017), and frequently rely on the use of Natural Language Generation (NLG) templates over dialogue acts (Fazel-Zarandi et al, 2017;Shi et al, 2019;Kreyssig et al, 2018;Acharya et al, 2021). Our work instead utilizes fully lexicalized, E2E models for both the User and the Assistant models, without the need for agendas, dialogue acts, or NLG templates.…”
Section: Related Work (mentioning)
confidence: 99%
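
The statement above describes a common pattern: a user simulator and an assistant policy exchange turns to produce synthetic dialogues, and task success supplies the reward signal for reinforcement learning. As a rough illustration only (this is not code from the cited paper or from Alexa Conversations; the actions, the toy reward, and all names are invented), a minimal simulator-plus-policy loop might look like this:

# Hypothetical sketch: a user simulator and an assistant policy generate a
# synthetic dialogue, and task success provides a reward for a simple
# preference-score update. All names and the reward scheme are invented.
import random
from typing import List, Tuple

ACTIONS = ["request_date", "request_city", "confirm", "book"]

class UserSimulator:
    """Agenda-free stand-in: answers slot requests from a fixed goal."""
    def __init__(self) -> None:
        self.goal = {"city": "Seattle", "date": "Friday"}

    def respond(self, action: str) -> str:
        if action == "request_city":
            return self.goal["city"]
        if action == "request_date":
            return self.goal["date"]
        return "ok"

class AssistantPolicy:
    """Toy policy: one preference score per action, epsilon-greedy choice."""
    def __init__(self) -> None:
        self.scores = {a: 0.0 for a in ACTIONS}

    def act(self) -> str:
        if random.random() < 0.2:          # explore
            return random.choice(ACTIONS)
        return max(self.scores, key=self.scores.get)

    def update(self, trajectory: List[str], reward: float, lr: float = 0.1) -> None:
        # reinforce every action taken in the episode with the final reward
        for a in trajectory:
            self.scores[a] += lr * reward

def rollout(user: UserSimulator, policy: AssistantPolicy, max_turns: int = 6) -> Tuple[List[str], float]:
    """Generate one synthetic dialogue; reward 1.0 only if both slots were requested before booking."""
    asked, trajectory = set(), []
    for _ in range(max_turns):
        action = policy.act()
        trajectory.append(action)
        user.respond(action)
        if action.startswith("request_"):
            asked.add(action)
        if action == "book":
            success = {"request_city", "request_date"} <= asked
            return trajectory, 1.0 if success else -1.0
    return trajectory, -1.0

policy = AssistantPolicy()
for _ in range(500):
    traj, reward = rollout(UserSimulator(), policy)
    policy.update(traj, reward)
print(policy.scores)
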
“…Most closely related to our work is that of and Acharya et al (2021). Similar to our approach, both use User Simulators and Schemas to generate synthetic data and add them to a training dataset.…”
Section: Related Work (mentioning)
confidence: 99%
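
Per the statement above, both of those works pair user simulators with schemas to synthesize dialogues and fold them into a training set. The following is a hypothetical sketch of that general idea only; the schema fields, templates, and function names are invented for illustration and are not taken from either paper:

# Hypothetical sketch of schema-driven synthetic data generation: a task schema
# (slots plus a few surface templates) is sampled to produce dialogue examples
# that are appended to an existing training set.
import json
import random

SCHEMA = {
    "task": "book_restaurant",
    "slots": {
        "city": ["Seattle", "Boston", "Austin"],
        "cuisine": ["thai", "italian", "mexican"],
    },
    "user_templates": [
        "Find me a {cuisine} place in {city}.",
        "I want {cuisine} food in {city} tonight.",
    ],
    "assistant_templates": [
        "Sure, booking a {cuisine} restaurant in {city}.",
    ],
}

def sample_dialogue(schema: dict) -> dict:
    """Fill one user/assistant exchange with randomly sampled slot values."""
    values = {slot: random.choice(options) for slot, options in schema["slots"].items()}
    return {
        "user": random.choice(schema["user_templates"]).format(**values),
        "assistant": random.choice(schema["assistant_templates"]).format(**values),
        "slots": values,
    }

def augment(dataset: list, schema: dict, n: int = 100) -> list:
    """Return the original dataset plus n synthetic dialogues."""
    return dataset + [sample_dialogue(schema) for _ in range(n)]

train = []  # stand-in for an existing training set
train = augment(train, SCHEMA, n=5)
print(json.dumps(train, indent=2))
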