2020
DOI: 10.1109/lra.2020.2996067

Data-Efficient Model Learning and Prediction for Contact-Rich Manipulation Tasks

Abstract: Learning dynamics models is an essential component of model-based reinforcement learning. The learned model can be used for multi-step ahead predictions of the state variable, a process referred to as long-term prediction. Due to the recursive nature of the predictions, the accuracy has to be good enough to prevent significant error buildup. Accurate model learning in contact-rich manipulation is challenging due to the presence of varying dynamics regimes and discontinuities. Another challenge is the discontin…

Cited by 17 publications (7 citation statements)
References 25 publications (29 reference statements)
“…Most works that utilize force readings in the learning process can be categorized according to the force control strategy: direct and indirect force control. The former explicitly assumes a force feedback controller and thus a taskframe authority strategy is needed to select which Cartesian axes are position or force controlled [17], [18], [19]. The latter exploits impedance control to indirectly control the forces required by the task [2], [3], which is the approach we leverage in this work.…”
Section: B. Force-Based and Impedance Learning
confidence: 99%
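The indirect strategy described above regulates contact forces through an impedance law rather than an explicit force feedback loop. A minimal sketch of this idea is below; the gains, the 1-D setup, and the numbers are illustrative assumptions, not values from the paper.

```python
import numpy as np

def impedance_force(x, x_dot, x_des, x_des_dot, K, D):
    """Commanded Cartesian force F = K (x_des - x) + D (x_des_dot - x_dot).

    Forces are controlled indirectly: pressing the virtual equilibrium
    x_des into a surface produces a contact force proportional to the
    stiffness K times the penetration, with no force sensor in the loop.
    """
    return K @ (x_des - x) + D @ (x_des_dot - x_dot)

# Example (hypothetical): 1-D contact along z. The surface holds the
# end-effector at z = 0 while the desired equilibrium lies 5 mm below it.
K = np.diag([500.0])   # stiffness [N/m]
D = np.diag([20.0])    # damping [N s/m]
F = impedance_force(np.array([0.0]), np.zeros(1),
                    np.array([-0.005]), np.zeros(1), K, D)
# F[0] = -2.5, i.e. a 2.5 N force pushing into the surface
```

Raising K makes the exerted force track the commanded penetration more stiffly; lowering it makes the interaction more compliant, which is the knob impedance-learning methods typically adapt.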
“…These methods seldom scale to manipulation control, which often involves contact with the environment. Learning [13] or even offline modeling of contact dynamics is notoriously difficult. Our previous work [12] made progress by exploiting the passive interaction property between the manipulator and its environment, which allowed us to skip learning of contact dynamics.…”
Section: Reinforcement Learning
confidence: 99%
“…To explore the need for incremental hybrid models for systems with piece-wise continuous dynamics (H3), we compared our hybrid model learning strategy with a baseline framework that performs offline long-term prediction of dynamics (Khader et al. 2020). It identifies the different dynamic modes of the task from a training dataset of desired motion, uses multi-class Support Vector Machines to build a guard function that predicts mode changes, builds a separate Gaussian Process (GP) dynamics model for each mode, and provides a probabilistic algorithm for multi-step prediction of joint-space state variables for the changing-contact manipulation task.…”
Section: Need for Online Learning for Hybrid Systems
confidence: 99%
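The baseline pipeline summarized in that excerpt (SVM guard selects the active mode, a per-mode GP predicts the next state, and predictions are rolled forward recursively) can be sketched on a toy 1-D system. Everything here is illustrative: the data, the two-mode dynamics, and the mean-only rollout (the actual framework propagates full predictive distributions).

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.gaussian_process import GaussianProcessRegressor

# Toy piece-wise dynamics (hypothetical): "free" motion for x <= 0,
# stiff "contact" behavior for x > 0.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))        # sampled states
modes = (X[:, 0] > 0).astype(int)            # mode labels from the dataset
y = np.where(modes == 1, 0.2 * X[:, 0], 0.9 * X[:, 0])  # next state

# Guard function: multi-class SVM predicting the active dynamics mode.
guard = SVC().fit(X, modes)
# One GP dynamics model per mode.
gps = [GaussianProcessRegressor().fit(X[modes == m], y[modes == m])
       for m in (0, 1)]

def predict_multistep(x0, steps):
    """Long-term prediction: each step routes the state through the
    guard to pick a mode, then through that mode's GP (mean only)."""
    x = np.array([x0])
    traj = []
    for _ in range(steps):
        m = int(guard.predict(x.reshape(1, -1))[0])
        x = gps[m].predict(x.reshape(1, -1))
        traj.append(float(x[0]))
    return traj

traj = predict_multistep(0.8, 5)   # recursive 5-step rollout
```

Because each prediction is fed back as the next input, per-step model error compounds over the horizon, which is exactly why the excerpt stresses prediction accuracy and why mispredicted mode switches at the guard boundary are the dominant failure case.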