LanCon-Learn: Learning With Language to Enable Generalization in Multi-Task Manipulation
2022
DOI: 10.1109/lra.2021.3139667


Cited by 12 publications (7 citation statements); References 28 publications.
“…"Go around the tree to your left and place the ball." Another popular alternative is language-conditioned learning, where language is employed to specify a reward function or a task (Silva et al., 2021a; Andreas et al., 2017; Shridhar et al., 2022). Such approaches seek to improve an agent's ability to complete tasks through intermediate language inputs, such as "take the ladder to your left".…”
Section: Learning Strategies From Language
confidence: 99%
“…In this paper, we develop an approach to solve a task we call automatic strategy translation, wherein we learn to infer strategic intent, in the form of goals and constraints, from language. Prior work has developed methods that utilize language to specify the policies of an AI agent (Tambwekar et al., 2021; Gopalan et al., 2018; Thomason et al., 2019; Blukis et al., 2019) or to specify reward functions or tasks that can be optimized via reinforcement learning (RL) or a planner (Gopalan et al., 2018; Padmakumar et al., 2021; Silva et al., 2021a). However, our work is the first to translate language into goals and constraints, which can be applied in constrained optimization approaches for directing agent behavior independent of the original human specifier.…”
Section: Introduction
confidence: 99%
“…[7] [FAccT '22, June 21-24, 2022, Seoul, Republic of Korea; Andrew Hundt, William Agnew, Vicky Zeng, Severin Kacianka, and Matthew Gombolay] LfD warm-starts the process of synthesizing an "optimal" robot control policy with respect to a narrowly defined metric: the robot performs the easier, supervised learning task of imitating a human demonstrator, followed by the more difficult problem of perfecting its behavior through RL [21]. Such approaches have been extended to 'zero-shot' settings, where the robot is initially trained on a distribution of related tasks and then performs a novel task, such as through guidance from natural language instructions [98,100]. Many learning methods, including zero-shot and transfer learning of robot skills, continue to rapidly improve [19, 47-49, 93, 100, 111], often without loading dissolution models.…”
Section: Robotics and AI With and Without Dissolution Models
confidence: 99%
“…Several approaches for LfD have been proposed, including Imitation Learning (IL) (Paleja et al. 2020; Wang and Gombolay 2020; Silva et al. 2021) and Inverse Reinforcement Learning (IRL) (Gombolay 2020, 2021). Unlike IL's direct learning of a mapping from states to demonstrated actions, IRL infers the latent reward functions that the demonstrators optimize for.…”
Section: Related Work
confidence: 99%