2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids)
DOI: 10.1109/humanoids.2015.7363535
Optimizing robot striking movement primitives with Iterative Learning Control

Cited by 5 publications (8 citation statements)
References 28 publications
“…We have compared bayesILC to two other ILC methods: batch ILC (32) and ILC with PD feedback (with constant p, d gains). PD-type ILC with constant p, d gains is often too simplistic, and it did not yield any improvement in our setup, even after tuning the gains.…”
Section: B. Real Robot Table Tennis (mentioning)
confidence: 99%
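The constant-gain PD-type ILC referenced in this excerpt admits a very compact sketch. Below is a minimal, hypothetical Python implementation of the update rule u_{k+1}(t) = u_k(t) + p * e_k(t) + d * de_k(t)/dt; the function name and gain values are illustrative assumptions, not taken from the paper.

    import numpy as np

    def pd_ilc_update(u, e, dt, p=0.5, d=0.05):
        # PD-type ILC: u_next(t) = u(t) + p * e(t) + d * de(t)/dt
        # u: feedforward input of iteration k, shape (T,)
        # e: tracking error of iteration k, shape (T,)
        # dt: sampling period; p, d: constant learning gains (assumed values)
        e_dot = np.gradient(e, dt)      # finite-difference derivative of the error
        return u + p * e + d * e_dot    # feedforward input for iteration k+1

Because a single (p, d) pair applies the same correction at every point of the trajectory, such a scheme cannot adapt to state-dependent dynamics, which is consistent with the excerpt's observation that tuning the gains did not help.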
“…However, these approaches are heavily structured for the problem at hand, introducing and tuning additional domain parameters. In [32], we instead proposed to use rhythmic movement primitives that allow for a limit-cycle attractor, which is desirable if we want to maintain the striking motion through the goal state. After the strike is completed, the DMP can be used to return to the initial state, or it can be terminated by setting the forcing terms to zero.…”
Section: Appendix B. Movement Generation for Table Tennis (mentioning)
confidence: 99%
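To make the termination mechanism concrete, here is a minimal sketch of one integration step of a rhythmic DMP with a switchable forcing term. The gains, the periodic basis functions, and the function name are assumptions for illustration; the paper's exact formulation may differ. Setting enable_forcing to False zeroes the forcing term, so the dynamics collapse to the stable second-order attractor, as described above.

    import numpy as np

    def rhythmic_dmp_step(y, yd, phi, g, w, dt, alpha=25.0, beta=6.25,
                          omega=2 * np.pi, enable_forcing=True):
        # One Euler step of a rhythmic DMP (illustrative form).
        # y, yd: current position and velocity; phi: oscillator phase
        # g: anchor point of the oscillation; w: learned forcing-term weights
        centers = np.linspace(0, 2 * np.pi, len(w), endpoint=False)
        psi = np.exp(2.5 * (np.cos(phi - centers) - 1.0))   # periodic basis functions
        # Zeroing the forcing term terminates the primitive: only the
        # stable attractor dynamics around g remain.
        f = (psi @ w) / (psi.sum() + 1e-10) if enable_forcing else 0.0
        ydd = alpha * (beta * (g - y) - yd) + f             # transformation system
        return y + yd * dt, yd + ydd * dt, (phi + omega * dt) % (2 * np.pi)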
“…The Frobenius norm of the trajectory deviations, J_k, is plotted over the iterations k. Results are averaged over ten experiments; for each experiment, trajectories, nominal models, and actual models are drawn randomly from Gaussian processes. The performance of the batch pseudoinverse ILC (32) is shown as the red line. Numerical stability issues prevent it from settling at a steady-state error, whereas the recursive ILC (blue line) converges stably.…”
Section: Evaluations and Experiments (mentioning)
confidence: 99%
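For context on the comparison in this excerpt, the batch pseudoinverse ILC update and the Frobenius-norm cost J_k can be sketched as follows. The lifted-matrix formulation and the rcond truncation are assumptions for illustration; recomputing the pseudoinverse of a large, poorly conditioned lifted matrix is one plausible source of the numerical issues mentioned.

    import numpy as np

    def batch_pinv_ilc_update(u, e, F, rcond=1e-8):
        # Batch ILC over the lifted trajectory:
        #   u_{k+1} = u_k + pinv(F) @ e_k,
        # where F maps stacked input deviations to stacked output deviations.
        # rcond truncates small singular values (an assumed mitigation).
        return u + np.linalg.pinv(F, rcond=rcond) @ e

    def trajectory_cost(E):
        # J_k: Frobenius norm of the trajectory-deviation matrix E (T x dof).
        return np.linalg.norm(E, ord='fro')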
“…Robot table tennis has captivated the robot control and learning communities since the nineties as a challenging and dynamic task. After the pioneering work of Anderson's analytical player [1], various approaches have focused on particular parts of the game, such as simplifying trajectory generation with a virtual hitting plane [2], [3] or learning striking trajectories from demonstrations [4]. Learning approaches that generate better strikes with Reinforcement Learning (RL) include [5], [6].…”
Section: Introduction (mentioning)
confidence: 99%