2020 IEEE/SICE International Symposium on System Integration (SII)
DOI: 10.1109/sii46433.2020.9025873
Transferring Human Manipulation Knowledge to Robots with Inverse Reinforcement Learning

Abstract: The need for adaptable models, e.g. reinforcement learning, has in recent years become more prominent within industry. In this paper, we show how two versions of inverse reinforcement learning can be used to transfer task knowledge from a human expert to a robot in a dynamic environment. Moreover, a second method, Principal Component Analysis weighting, is presented and discussed. This method shows potential in the use case but requires further research.
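The page does not describe the Principal Component Analysis weighting method in any detail, so the sketch below is only one plausible reading, assumed for illustration: the principal components of expert demonstration features are combined into a weight vector for a linear reward, as is common in feature-based IRL setups. All names and data here are hypothetical.

```python
# Illustrative sketch only (assumed interpretation of "PCA weighting"):
# use the leading principal components of expert demonstration features
# to weight a linear reward over state features.
import numpy as np
from sklearn.decomposition import PCA

def pca_reward_weights(demo_features, n_components=2):
    """Fit PCA on expert state features and build a linear reward weight
    vector from the leading components, scaled by explained variance."""
    pca = PCA(n_components=n_components)
    pca.fit(demo_features)  # demo_features: (n_samples, n_features)
    # Combine components, weighting each by the variance it explains.
    w = pca.explained_variance_ratio_ @ pca.components_
    return w / np.linalg.norm(w)

def linear_reward(state_features, w):
    """Reward as a weighted sum of state features, as in linear IRL models."""
    return float(state_features @ w)

# Hypothetical usage with random stand-in data for the expert demonstrations.
demos = np.random.rand(200, 6)
w = pca_reward_weights(demos)
print(linear_reward(demos[0], w))
```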

Cited by 6 publications (2 citation statements)
References 17 publications
“…A Bayesian Inverse Reinforcement Learning Fail (BIRLF) algorithm allows the agent to use successful and unsuccessful observations by taking advantage of failed demonstrations [154]. In [155], the DDPG and Principal Component Analysis (PCA) methods have been used to show how IRL can transfer task knowledge from a human expert to a robot in a dynamic environment. A method for applying demonstration learning using IRL has been presented in [131].…”
Section: Active IRL (mentioning; confidence: 99%)
“…One promising solution is to use inverse reinforcement learning techniques, which learn reward functions from demonstrations or expert behaviour. By combining inverse reinforcement learning with deep reinforcement learning, robots can learn more effective control policies for deformable object manipulation tasks [116][117][118]123].…”
Section: End-to-End Learning (mentioning; confidence: 99%)
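The citation statement above points at combining IRL-learned rewards with deep reinforcement learning. As a minimal sketch, assuming a linear reward over state features, an apprenticeship-learning-style update nudges the reward weights toward the expert's feature expectations, and a downstream deep RL learner (e.g. an actor-critic such as DDPG, as mentioned in the cited work) would then optimise its policy against that learned reward. Function names and data are illustrative assumptions, not the paper's implementation.

```python
# A minimal sketch, assuming a linear reward over state features.
import numpy as np

def irl_update(w, expert_features, policy_features, lr=0.1):
    """Move reward weights toward the expert's feature expectations and
    away from those of the current policy (apprenticeship-learning style)."""
    grad = expert_features.mean(axis=0) - policy_features.mean(axis=0)
    w = w + lr * grad
    return w / (np.linalg.norm(w) + 1e-8)

def learned_reward(state_features, w):
    """Reward signal handed to the RL learner in place of an environment reward."""
    return float(state_features @ w)

# Hypothetical usage: random stand-ins for expert and policy rollout features.
expert = np.random.rand(100, 4)
rollout = np.random.rand(100, 4)
w = np.zeros(4)
for _ in range(20):
    w = irl_update(w, expert, rollout)
print(learned_reward(rollout[0], w))
```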