Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021
DOI: 10.18653/v1/2021.naacl-main.64

How to Motivate Your Dragon: Teaching Goal-Driven Agents to Speak and Act in Fantasy Worlds

Abstract: We seek to create agents that both act and communicate with other agents in pursuit of a goal. Towards this end, we extend LIGHT (Urbanek et al., 2019), a large-scale crowd-sourced fantasy text game, with a dataset of "quests". These contain natural language motivations paired with in-game goals and human demonstrations; completing a quest might require dialogue or actions (or both). We introduce a reinforcement learning system that (1) incorporates large-scale language modeling-based and commonsense reasoning-based pre-training to imbue the agent with relevant priors, and (2) leverages a factorized action space of commands and dialogue.
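
To make the abstract's setup concrete, the sketch below illustrates what a "factorized action space of commands and dialogue" could look like for a goal-conditioned agent: at each step the agent selects both a game command and an utterance, conditioned on the quest's natural-language motivation. All names here (Observation, FactoredAction, select_action) and the word-overlap scoring are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Observation:
    setting: str            # textual scene description from the text game
    quest_motivation: str   # natural-language motivation, e.g. a quest goal
    dialogue_history: List[str]

@dataclass
class FactoredAction:
    command: str     # in-game action, e.g. "give sword to blacksmith"
    utterance: str   # dialogue line spoken to the other agent

def encode(obs: Observation) -> str:
    """Flatten the observation into one goal-conditioned text context."""
    return " | ".join([obs.quest_motivation, obs.setting, *obs.dialogue_history])

def select_action(obs: Observation,
                  candidate_commands: List[str],
                  candidate_utterances: List[str]) -> FactoredAction:
    """Toy policy: score each candidate by word overlap with the
    goal-conditioned context (a stand-in for learned action/dialogue scores)."""
    context_words = set(encode(obs).lower().split())
    score = lambda text: len(context_words & set(text.lower().split()))
    return FactoredAction(
        command=max(candidate_commands, key=score),
        utterance=max(candidate_utterances, key=score),
    )

if __name__ == "__main__":
    obs = Observation(
        setting="You are in the village forge. A blacksmith stands by the anvil.",
        quest_motivation="Get the blacksmith to repair your broken sword.",
        dialogue_history=["Blacksmith: What brings you here, traveler?"],
    )
    action = select_action(
        obs,
        candidate_commands=["give sword to blacksmith", "go to tavern"],
        candidate_utterances=["Could you repair my sword?", "Nice weather today."],
    )
    print(action)
```

In a learned system the two heads (command and utterance) would share the encoded context but be scored separately, which is the essence of factorizing the action space rather than enumerating every command-utterance pair.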

Cited by 18 publications (18 citation statements); references 40 publications.

“…Previous works also considered identifying a task by textual goal specifications (Bahdanau et al., 2019; Fu et al., 2019). In the domain of text-based games, such a goal-conditioned RL setting has been studied with quest generation tasks (Ammanabrolu et al., 2019, 2021). In our work, we specify a subtask by its goal.…”
Section: Hierarchical Reinforcement Learning (mentioning)
confidence: 99%
“…The authors design LIGHT to investigate how 'a model can both speak and act grounded in perception of its environment and dialogue from other speakers'. Ammanabrolu et al. [2020b] extended this by providing a system that incorporates '1) large-scale language modelling based commonsense reasoning pre-training to imbue the agent with relevant priors and 2) a factorized action space of commands and dialogue'. Furthermore, evaluation can be performed against a collected dataset of held-out human demonstrations.…”
Section: Understanding Parser Feedback and Language Acquisition (mentioning)
confidence: 99%
“…Benchmarks on embodied agents and language: Benchmarks featuring language and agents embodied in physical worlds already exist; however, many of them only consider the aforementioned instruction-following [Chevalier-Boisvert et al., 2018a, Misra et al., 2018, Ruis et al., 2020] and question-answering [Das et al., 2017, Gordon et al., 2018] scenarios. In between disembodied NLP testbeds [Wang et al., 2018, Zadeh et al., 2019] and previous embodied benchmarks is the LIGHT environment [Urbanek et al., 2019], a multiplayer text adventure game that allows the study of social settings requiring complex dialogue production [Ammanabrolu et al., 2020, Prabhumoye et al., 2020]. Instead of the virtual embodiment of text-worlds, SocialAI tackles the arguably harder and richer setting of egocentric embodiment among embodied social peers.…”
Section: Related Work (mentioning)
confidence: 99%
“…In the last few years, however, recent advances in both DRL and NLP have made the Machine Learning community reconsider experiments with language-based interactions [Luketina et al., 2019, Bender and Koller, 2020]. Text-based exploratory games have been leveraged to study the capacities of autonomous agents to properly navigate through language in abstract worlds [Côté et al., 2018, Prabhumoye et al., 2020, Ammanabrolu et al., 2020]. While these environments allow meaningful abstractions, they neglect the importance of embodiment for language learning, which has long been identified as an essential component for proper language understanding and grounding [Cangelosi et al., 2010, Bisk et al., 2020].…”
Section: Introduction (mentioning)
confidence: 99%