2017
DOI: 10.48550/arxiv.1707.02920
Preprint
Vision-Based Multi-Task Manipulation for Inexpensive Robots Using End-To-End Learning from Demonstration

Cited by 9 publications (12 citation statements)
References 0 publications
“…Boularias et al [1], Finn et al [6]), or, as in our case, via adversarial learning [15]. When expert actions or expert policies are available, behavioral cloning can be used (Rahmatizadeh et al [36], James et al [17], Duan et al [5]). Alternatively, expert trajectories can be used as additional training data for off-policy algorithms such as DPG (e.g.…”
Section: Related Work
confidence: 99%
“…Kinesthetic teaching is not intuitive and can result in unwanted visual artifacts [9], [11]. Using motion capture devices for teleoperation, such as [37], is more intuitive and can solve this issue. However, the human teacher typically observes the scene through a different angle from the robot, which may render certain objects only visible to the human or the robot (due to occlusions), making imitation challenging.…”
confidence: 99%
“…Prior work has shown that robots can acquire a range of complex skills through demonstration, such as table tennis [28], lane following [34], pouring water [31], drawer opening [38], and multi-stage manipulation tasks [62]. However, the most effective methods for robot imitation differ significantly from how humans and animals might imitate behaviors: while robots typically need to receive demonstrations in the form of kinesthetic teaching [32,1] or teleoperation [8,35,62], humans and animals can acquire the gist of a behavior simply by watching someone else. In fact, we can adapt to variations in morphology, context, and task details effortlessly, compensating for whatever domain shift may be present and recovering a skill that we can use in new situations [6].…”
Section: Introduction
confidence: 99%
“…1 II. RELATED WORK Most imitation learning and learning from demonstration methods operate at the level of configuration-space trajectories [44,2], which are typically collected using kinesthetic teaching [32,1], teleoperation [8,35,62], or sensors on the demonstrator [11,9,7,21]. Instead, can we allow robots to imitate just by watching the demonstrator perform the task?…”
Section: Introduction
confidence: 99%