Introduction

After the remarkable successes of recent work on visually grounded models of language, the embodied and task-oriented aspects of language learning stand as a natural next challenge. As autonomous robotic agents become increasingly capable and are deployed in progressively more complex environments, expressive and accessible interfaces are becoming essential to realizing the potential of such technologies. Natural language is immediately available to non-expert users and expressive enough to represent complex actions and plans. Can we give instructions to robotic agents to assist with navigation and manipulation tasks in remote settings? Can we talk to robots about the surrounding visual world, and help them interactively learn the language needed to finish a task? To build robots that we can converse with in our homes, offices, hospitals, and warehouses, it is essential that we develop new techniques for linking language to action in the real world.

While the opportunity is clear, enabling effective interaction between users and autonomous agents requires addressing some of the core open challenges in NLP while studying new domains and tasks. This workshop aims to explore these challenges, bringing together members of the NLP, robotics, and vision communities to focus on language grounding in robots and other interactive goal-driven systems. The program features twelve new articles and seven cross-submissions from related areas, to be presented as both posters and talks. We are also excited to host a remarkable group of invited speakers, including Regina Barzilay, Joyce Chai, Karl Moritz Hermann, Hadas Kress-Gazit, Terence Langendoen, Percy Liang, Ray Mooney, Nicholas Roy, Stefanie Tellex, and Jason Weston.

We thank the program committee, the ACL workshop chairs Wei Xu and Jonathan Berant, the invited speakers, and our sponsors DeepMind and Facebook.