Proceedings of the First Workshop on Language Grounding for Robotics 2017
DOI: 10.18653/v1/w17-2802

Learning how to Learn: An Adaptive Dialogue Agent for Incrementally Learning Visually Grounded Word Meanings

Abstract: We present an optimised multi-modal dialogue agent for interactive learning of visually grounded word meanings from a human tutor, trained on real human-human tutoring data. Within a life-long interactive learning period, the agent, trained using Reinforcement Learning (RL), must be able to handle natural conversations with human users, and achieve good learning performance (i.e. accuracy) while minimising human effort in the learning process. We train and evaluate this system in interaction with a simulated human tutor.
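As a rough illustration of the trade-off the abstract describes, the sketch below shows a tabular, epsilon-greedy RL loop in which a dialogue agent picks tutoring acts under a reward that combines learning gain with a penalty for tutor effort. This is a minimal sketch under assumed names (DIALOGUE_ACTS, TUTOR_COST, EFFORT_WEIGHT, and the toy states are all hypothetical), not the paper's actual implementation, state space, or action set.

```python
# Minimal sketch (not the paper's implementation) of the trade-off the
# abstract describes: an RL dialogue agent choosing tutoring acts so as
# to maximise learning gain while penalising human (tutor) effort.
# All names and values here are hypothetical.
import random
from collections import defaultdict

DIALOGUE_ACTS = ["ask_label", "ask_confirmation", "assert_label", "listen"]
TUTOR_COST = {"ask_label": 1.0, "ask_confirmation": 0.5,   # effort each act
              "assert_label": 0.2, "listen": 0.0}          # demands of tutor

EPSILON, ALPHA, EFFORT_WEIGHT = 0.1, 0.2, 0.3
q_values = defaultdict(float)  # (state, act) -> estimated value

def choose_act(state):
    """Epsilon-greedy selection over dialogue acts."""
    if random.random() < EPSILON:
        return random.choice(DIALOGUE_ACTS)
    return max(DIALOGUE_ACTS, key=lambda a: q_values[(state, a)])

def update(state, act, accuracy_gain):
    """Bandit-style value update: reward = learning gain minus effort cost."""
    reward = accuracy_gain - EFFORT_WEIGHT * TUTOR_COST[act]
    q_values[(state, act)] += ALPHA * (reward - q_values[(state, act)])

# Toy interaction loop against a stand-in for the simulated tutor.
for episode in range(1000):
    state = random.choice(["unknown_word", "uncertain_word", "known_word"])
    act = choose_act(state)
    # Pretend asking about unknown words yields the largest accuracy gain.
    gain = {"unknown_word": 1.0, "uncertain_word": 0.4, "known_word": 0.0}[state]
    gain *= 1.0 if act in ("ask_label", "ask_confirmation") else 0.1
    update(state, act, gain)

print({s: max(DIALOGUE_ACTS, key=lambda a: q_values[(s, a)])
       for s in ("unknown_word", "uncertain_word", "known_word")})
```

Under this toy reward, the effort penalty pushes the learned policy to stop querying the tutor once a word is known, which is the qualitative behaviour the abstract's "accuracy versus human effort" objective aims for.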

Cited by 11 publications (13 citation statements)
References: 22 publications
“…Situated dialog encompasses various aspects of interaction. These include: situated Natural Language Processing (Bastianelli et al., 2016); situated reference resolution (Misu, 2018); language grounding (Johnson et al., 2017); visual question answering/visual dialog (Antol et al., 2015); dialog agents for learning visually grounded word meanings and learning from demonstration (Yu et al., 2017); and Natural Language Generation (NLG), e.g. of situated instructions and referring expressions (Byron et al., 2009; Kelleher and Kruijff, 2006).…”
Section: Related Work
confidence: 99%
“…This corpus has been utilized to ground deep learning model representations of visual attributes (colors and shapes) in dialogue via interaction with a simulated tutor (Ling and Fidler, 2017; Yu et al., 2017b). Follow-up work has used this data to model a student learning the names and colors of shapes using a reinforcement learning framework (Yu et al., 2016, 2017a).…”
Section: Tutoring Dialogue Corpus Creation
confidence: 99%
“…Any number of different models can be used to perform this symbol grounding, such as SVMs [59, 66], Nearest Neighbor classifiers [17], and Deep Neural Models [34, 62]. New concepts can also be described in terms of previously grounded words, e.g.…”
Section: Related Work
confidence: 99%
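To make the nearest-neighbour option in that quote concrete, here is a minimal sketch of grounding colour words in raw RGB features with a 1-nearest-neighbour rule. The exemplar list and the ground function are illustrative assumptions, not code or data from any of the cited systems [17, 34, 59, 62, 66].

```python
# Minimal sketch of one grounding strategy the quote mentions: a
# nearest-neighbour classifier mapping visual features (here, raw RGB
# values) to attribute words. Exemplars are hypothetical, standing in
# for feature/label pairs gathered during tutoring.
import math

exemplars = [
    ((255, 0, 0), "red"), ((200, 30, 30), "red"),
    ((0, 0, 255), "blue"), ((30, 30, 200), "blue"),
    ((0, 200, 0), "green"),
]

def ground(feature):
    """Label a new observation with the word of its nearest exemplar."""
    return min(exemplars, key=lambda ex: math.dist(feature, ex[0]))[1]

print(ground((220, 10, 20)))  # -> "red"
```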