2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros40897.2019.8968091
Combined Task and Action Learning from Human Demonstrations for Mobile Manipulation Applications

Abstract: Learning from demonstrations is a promising paradigm for transferring knowledge to robots. However, learning mobile manipulation tasks directly from a human teacher is a complex problem as it requires learning models of both the overall task goal and of the underlying actions. Additionally, learning from a small number of demonstrations often introduces ambiguity with respect to the intention of the teacher, making it challenging to commit to one model for generalizing the task to new settings. In this paper, …

Cited by 11 publications (4 citation statements) · References 21 publications
“…Welschehold et al. approached the problem by mapping the human torso movement for a single-arm task [8]. They have further extended their work to learn individual actions and disambiguate overall task goals in a mobile setting from a small number of demonstrations [9].…”
Section: Related Work (mentioning)
confidence: 99%
“…Most related to our work, several works have leveraged IL for MM tasks [5,6,7,8]. [5] presented a web-based tool for crowdsourcing a large-scale dataset of MM tasks, and used it in combination with motion planning for execution on the robot.…”
Section: Related Work (mentioning)
confidence: 99%
“…[5] presented a web-based tool for crowdsourcing a large-scale dataset of MM tasks, and used it in combination with motion planning for execution on the robot. [6] and [7] collected RGB-D observations of humans performing tasks such as door opening and tabletop object manipulation, and used hypergraph optimization and a search procedure, respectively, to adapt these trajectories to be executable by a robot. Lastly, [8] collected VR demonstrations of pick-and-place actions and extracted a sequence of symbolic actions and action parametrizations to adapt them for use on a robot.…”
Section: Related Work (mentioning)
confidence: 99%
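The citation statement above mentions adapting recorded human trajectories via (hyper)graph optimization so a robot can execute them. Purely as a rough illustration of that idea, the following minimal least-squares sketch jointly adjusts demonstrated gripper waypoints and a mobile-base path so the gripper stays within arm reach of the base. The waypoints, weights, and reach limit are all illustrative assumptions, not the formulation or parameters of the cited works.

# Minimal sketch: adapting a demonstrated 2D gripper trajectory to a mobile robot
# via least-squares optimization. All values here are illustrative assumptions.
import numpy as np
from scipy.optimize import least_squares

demo_gripper = np.array([[0.0, 0.0], [0.2, 0.1], [0.4, 0.1], [0.6, 0.2]])  # recorded hand positions (m)
base_init = demo_gripper - np.array([0.5, 0.0])                            # rough initial base guess
MAX_REACH = 0.45                        # assumed arm reach of the robot (m)
W_DEMO, W_SMOOTH, W_REACH = 1.0, 0.5, 10.0  # assumed cost weights

def residuals(x):
    n = len(demo_gripper)
    grip = x[:2 * n].reshape(n, 2)      # optimized gripper waypoints
    base = x[2 * n:].reshape(n, 2)      # optimized base waypoints
    res = []
    # Stay close to the demonstrated gripper motion.
    res.append(W_DEMO * (grip - demo_gripper).ravel())
    # Keep the base path smooth between consecutive waypoints.
    res.append(W_SMOOTH * np.diff(base, axis=0).ravel())
    # Soft reachability term: penalize gripper waypoints beyond arm reach.
    dist = np.linalg.norm(grip - base, axis=1)
    res.append(W_REACH * np.maximum(0.0, dist - MAX_REACH))
    return np.concatenate(res)

x0 = np.concatenate([demo_gripper.ravel(), base_init.ravel()])
sol = least_squares(residuals, x0)
n = len(demo_gripper)
print("adapted gripper waypoints:\n", sol.x[:2 * n].reshape(n, 2))
print("adapted base waypoints:\n", sol.x[2 * n:].reshape(n, 2))

The cited approaches operate on full 6-DoF poses with robot-specific constraints; this sketch only conveys the shared structure of trading off fidelity to the demonstration against feasibility for the robot.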