This paper presents an approach for learning environmental knowledge from task-based human-robot dialog. Previous approaches to dialog use domain knowledge to constrain the types of language people are likely to use. In contrast, by introducing a joint probabilistic model over speech, the resulting semantic parse, and the mapping from each element of the parse to a physical entity in the building (i.e., grounding), our approach is flexible to the ways that untrained people interact with robots, is robust to speech-to-text errors, and is able to learn referring expressions for physical locations in a map (i.e., to create a semantic map). Our approach has been evaluated by having untrained people interact with a service robot. Starting with an empty semantic map, our approach is able to ask 50% fewer questions than a baseline approach, thereby enabling more effective and intuitive human-robot dialog.
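A rough sketch of the kind of joint scoring this abstract describes is given below. It assumes a simple factorization p(speech, parse, grounding) = p(speech) * p(parse | speech) * p(grounding | parse) over an ASR n-best list, candidate parses, and candidate map entities; the paper's actual model, features, and inference procedure may differ.

```python
# Minimal sketch (assumed factorization, not the paper's actual model):
# score each (speech, parse, grounding) triple and keep the best one.

def best_interpretation(speech_hyps, parse_candidates, grounding_candidates):
    """Pick the (utterance, parse, map entity) triple with the highest joint score.

    speech_hyps:          list of (utterance, p_speech) from the ASR n-best list
    parse_candidates:     fn(utterance) -> list of (parse, p_parse)
    grounding_candidates: fn(parse) -> list of (map_entity, p_grounding)
    """
    best, best_score = None, 0.0
    for utterance, p_speech in speech_hyps:
        for parse, p_parse in parse_candidates(utterance):
            for entity, p_grounding in grounding_candidates(parse):
                score = p_speech * p_parse * p_grounding
                if score > best_score:
                    best, best_score = (utterance, parse, entity), score
    return best, best_score
```

Reasoning jointly in this way lets a low-confidence speech hypothesis be rescued by a parse that grounds cleanly to a known location, which is why the approach can tolerate speech-to-text errors.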
Traditional goal-oriented dialogue systems rely on various components such as natural language understanding, dialogue state tracking, policy learning, and response generation. Training each component requires annotations that are hard to obtain for every new domain, limiting the scalability of such systems. Similarly, rule-based dialogue systems require extensive writing and maintenance of rules and do not scale either. End-to-end dialogue systems, on the other hand, do not require module-specific annotations but need a large amount of data for training. To overcome these problems, in this demo we present Alexa Conversations, a new approach for building goal-oriented dialogue systems that is scalable, extensible, and data-efficient. The components of this system are trained in a data-driven manner, but instead of collecting annotated conversations for training, we generate them using a novel dialogue simulator based on a few seed dialogues and specifications of APIs and entities provided by the developer. Our approach provides out-of-the-box support for natural conversational phenomena such as entity sharing across turns or users changing their mind during a conversation, without requiring developers to provide any such dialogue flows. We exemplify our approach using a simple pizza-ordering task and showcase its value in reducing the developer burden for creating a robust experience. Finally, we evaluate our system on a typical movie ticket booking task and show that the dialogue simulator is an essential component of the system, leading to over 50% improvement in turn-level action signature prediction accuracy.
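The sketch below illustrates the general idea of simulating training dialogues from a seed dialogue and an API specification, using the pizza-ordering example. The API spec, slot names, and templates here are hypothetical stand-ins, not the Alexa Conversations developer interface; a real simulator would also inject phenomena such as entity carry-over across turns and user corrections.

```python
# Hypothetical sketch of dialogue simulation: resample slot values from a
# developer-provided API spec to expand one annotated seed dialogue into many.

import random

API_SPEC = {  # assumed API signature with slot value catalogues
    "OrderPizza": {"size": ["small", "medium", "large"],
                   "topping": ["mushroom", "pepperoni", "olive"]},
}

SEED = [  # one annotated seed dialogue: (speaker, template, api)
    ("user", "I want a {size} pizza with {topping}", "OrderPizza"),
    ("agent", "Confirming a {size} {topping} pizza", "OrderPizza"),
]

def simulate(num_dialogues):
    """Generate annotated dialogues by resampling slot values in the seed."""
    dialogues = []
    for _ in range(num_dialogues):
        values = {slot: random.choice(options)
                  for slot, options in API_SPEC["OrderPizza"].items()}
        dialogues.append([(speaker, template.format(**values), api)
                          for speaker, template, api in SEED])
    return dialogues

print(simulate(2))
```

Because the synthetic dialogues carry their API annotations by construction, they can be used directly to train the downstream action prediction components without manual labeling.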