2019
DOI: 10.48550/arxiv.1907.13627
Preprint

Disentangled Relational Representations for Explaining and Learning from Demonstration

Abstract: Learning from demonstration is an effective method for human users to instruct desired robot behaviour. However, for most non-trivial tasks of practical interest, efficient learning from demonstration depends crucially on inductive bias in the chosen structure for rewards/costs and policies. We address the case where this inductive bias comes from an exchange with a human user. We propose a method in which a learning agent utilizes the information bottleneck layer of a high-parameter variational neural model, …
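The abstract refers to the information bottleneck layer of a variational neural model. As background, a minimal sketch of the per-dimension KL term that defines such a bottleneck is shown below; the function names and the plain-NumPy implementation are illustrative assumptions for exposition, not the paper's actual code.

```python
import numpy as np

def kl_per_dimension(mu, log_var):
    """KL divergence from N(mu, sigma^2) to the standard normal prior N(0, 1),
    computed per latent dimension.

    Dimensions with near-zero KL carry almost no information about the input,
    so the active (high-KL) dimensions form the learned bottleneck.
    """
    return 0.5 * (np.exp(log_var) + mu**2 - 1.0 - log_var)

def sample_latent(mu, log_var, rng):
    """Reparameterised sample z = mu + sigma * eps, with eps ~ N(0, 1)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# A latent dimension that exactly matches the prior contributes zero KL:
print(kl_per_dimension(np.array([0.0]), np.array([0.0])))  # -> [0.]
```

In a disentangled variational model, inspecting which latent dimensions have non-negligible KL is one common way to identify the factors the bottleneck has chosen to encode.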

Cited by 2 publications (3 citation statements)
References 20 publications
“…Reasoning about spatial references has been explored in various contexts such as instruction following for 2D and 3D navigation (MacMahon et al, 2006;Vogel and Jurafsky, 2010;Chen and Mooney, 2011;Artzi and Zettlemoyer, 2013;Kim and Mooney, 2013;Andreas and Klein, 2015;Fried et al, 2018;Liu et al, 2019;Jain et al, 2019;Gaddy and Klein, 2019;Hristov et al, 2019;Chen et al, 2019) and situated dialog for robotic manipulation (Skubic et al, 2002;Kruijff et al, 2007;Kelleher and Costello, 2009;Landsiedel et al, 2017). Most of these approaches utilize supervised data, either in the form of policy demonstrations or target geometric representations.…”
Section: Spatial Reasoning in Text
Mentioning confidence: 99%
“…'above', 'below') to perceptual processes like visual signals. While such early grounding efforts were limited by computational bottlenecks, several deep neural architectures have been recently proposed that jointly process text and visual input (Janner et al, 2017; Misra et al, 2017; Bisk et al, 2016; Liu et al, 2019; Jain et al, 2019; Gaddy and Klein, 2019; Hristov et al, 2019; Yu et al, 2018). While these approaches have made significant advances in improving the ability of agents at following spatial instructions, they are either not easily interpretable or require pre-specified parameterization to induce interpretable modules (Bisk et al, 2018).…”
Section: Introduction
Mentioning confidence: 99%
“…[18] has shown that there exist semantics in the latent space of generative adversarial networks (GANs), and [19] successfully decomposes the latent factor in a GAN into structured semantic parts. In addition to GANs, [20] has learned disentangled latent representations in a variational autoencoder (VAE) framework to ground spatial relations between objects. Unlike the information bottleneck motivation of [19], we use metric learning [8] to capture information such as maneuvers and interactions.…”
Section: A. Related Work
Mentioning confidence: 99%
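The last citation statement contrasts information-bottleneck training with metric learning. A standard instance of the latter is the triplet loss, sketched minimally below; the function name, margin value, and squared-Euclidean distance choice are assumptions for illustration, not details taken from the citing paper.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss on embedding vectors.

    Pulls the anchor toward the positive example and pushes it away
    from the negative one until their squared distances differ by at
    least `margin`; once separated, the loss is zero.
    """
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(d_pos - d_neg + margin, 0.0)

# Well-separated triplet: the negative is already far, so the loss vanishes.
print(triplet_loss(np.zeros(2), np.array([0.1, 0.0]), np.array([5.0, 0.0])))  # -> 0.0
```

Training embeddings with such a loss groups semantically similar items (e.g. similar maneuvers or interactions) without requiring an explicit generative model of the input.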