Assembly tasks performed with a robot often fail due to unforeseen situations, even when the assembly policy has been carefully learned and optimized. The problem is even more pronounced in humanoid robots acting in unstructured environments, where it is not possible to anticipate all factors that might cause the task to fail. In this work, we propose a concurrent LfD framework that associates demonstrated exception strategies with the given context. Whenever a failure occurs, the proposed algorithm generalizes past experience with respect to the current context and generates an appropriate policy that resolves the assembly issue. For this purpose, we apply PCA to force/torque data, which yields a low-dimensional descriptor of the current context. The proposed framework was validated on a peg-in-hole (PiH) task using a Franka Emika Panda robot.
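As a rough sketch of the context-descriptor step, assuming past force/torque recordings are flattened into one row per trial (the function names and the choice of plain SVD-based PCA are illustrative, not taken from the paper):

```python
import numpy as np

def context_descriptor(ft_signals, n_components=3):
    """Project force/torque recordings onto their principal components.

    ft_signals: (n_trials, n_features) matrix, e.g. flattened F/T time
    series from past assembly attempts.
    Returns per-trial low-dimensional descriptors, the PCA basis, and
    the data mean (needed to embed new trials in the same space).
    """
    mean = ft_signals.mean(axis=0)
    centered = ft_signals - mean
    # SVD of the centered data gives the principal axes
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]            # (n_components, n_features)
    descriptors = centered @ basis.T     # (n_trials, n_components)
    return descriptors, basis, mean

def describe_new_trial(ft_new, basis, mean):
    """Low-dimensional descriptor of a new failure context."""
    return (ft_new - mean) @ basis.T
```

A new failure's descriptor can then be compared against descriptors of past contexts to select or generalize an exception strategy.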
In this study, we propose a new method to enhance the performance of iterative learning control (ILC). We focus on robotic tasks requiring adaptation to an unknown or partially known environment, where the robot has to learn the environment geometry in order to perform the desired task with the given reference forces and torques. The initial motion trajectories are obtained by kinesthetic teaching, whereas the required forces and torques are prescribed by the task. We are interested in incremental learning, which ensures smooth and safe operation, aiming at the handling of delicate, fragile objects such as objects made of glass. To achieve these goals, we propose a new adaptive ILC scheme in which the adaptation is supervised by reinforcement learning. We also show how to apply ILC to orientational motion, taking into account the curved geometry of SO(3). The performance of the proposed algorithm is verified on a bimanual glass-wiping task. † The research leading to these results has received funding from the EU Horizon 2020 Programme under grant no. 680431, ReconCell, and from the Slovenian Research Agency under grant agreement no. J2-7360.
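To illustrate the basic ILC idea on a toy 1-D contact task (a minimal sketch with a spring-like surface model; the RL-supervised gain adaptation and SO(3) treatment from the paper are not reproduced here, and all names and parameters are illustrative):

```python
import numpy as np

def run_ilc(x_surface, k_env, f_des, gain=0.5, iterations=20):
    """Toy 1-D contact task solved by iterative learning control.

    The commanded position u presses on a spring-like surface and
    produces force k_env * (u - x_surface). Each trial's force error
    is fed forward into the next trial's command, so the force
    tracking error shrinks from trial to trial.
    """
    u = np.full_like(f_des, x_surface)   # start at the surface
    errors = []
    for _ in range(iterations):
        f = k_env * (u - x_surface)      # measured contact force
        e = f_des - f                    # force tracking error
        errors.append(np.abs(e).max())
        u = u + gain * e / k_env         # ILC update with learning gain
    return u, errors
```

With a fixed gain in (0, 2) scaled by the environment stiffness, the error contracts geometrically per trial; the small incremental corrections are what makes such schemes attractive for fragile objects.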
Traditional robot programming is often not feasible in small-batch production, as it is time-consuming, inefficient, and expensive. To shorten the time necessary to deploy robot tasks, we need appropriate tools that enable efficient reuse of existing robot control policies. Incremental Learning from Demonstration (iLfD) and reversible Dynamic Movement Primitives (DMPs) provide a framework for efficient policy demonstration and adaptation. In this paper, we extend our previously proposed framework with improvements that provide better performance and lower the algorithm’s computational burden. Further, we analyse the learning stability and evaluate the proposed framework in a comprehensive user study. The proposed methods have been evaluated on two popular collaborative robots, the Franka Emika Panda and the Universal Robots UR10.
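For readers unfamiliar with DMPs, a minimal one-DoF sketch of the standard discrete formulation is given below (the reversibility extension and iLfD machinery from the paper are not reproduced; gains, basis count, and function names are illustrative assumptions):

```python
import numpy as np

def train_dmp(y_demo, dt, n_basis=20, alpha_z=25.0, beta_z=6.25, alpha_x=3.0):
    """Fit a one-DoF discrete DMP to a demonstrated trajectory y_demo."""
    T = len(y_demo)
    tau = (T - 1) * dt
    yd = np.gradient(y_demo, dt)
    ydd = np.gradient(yd, dt)
    y0, g = y_demo[0], y_demo[-1]
    # phase variable decays from 1 towards 0 over the demonstration
    x = np.exp(-alpha_x * np.linspace(0.0, 1.0, T))
    # forcing term the demo implies: tau^2*ydd = alpha_z(beta_z(g-y)-tau*yd)+f
    f_target = tau**2 * ydd - alpha_z * (beta_z * (g - y_demo) - tau * yd)
    # Gaussian basis functions spread along the phase variable
    c = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))
    h = np.empty(n_basis)
    h[:-1] = 1.0 / np.diff(c) ** 2
    h[-1] = h[-2]
    psi = np.exp(-h[None, :] * (x[:, None] - c[None, :]) ** 2)
    # locally weighted regression for the basis weights
    s = x * (g - y0)
    w = (psi * (s * f_target)[:, None]).sum(0) / \
        ((psi * (s ** 2)[:, None]).sum(0) + 1e-10)
    return w, c, h, y0, g, tau

def rollout(w, c, h, y0, g, tau, dt, alpha_z=25.0, beta_z=6.25, alpha_x=3.0):
    """Integrate the DMP forward with Euler steps; returns the trajectory."""
    y, z, x = y0, 0.0, 1.0
    ys = [y]
    for _ in range(int(round(tau / dt))):
        psi = np.exp(-h * (x - c) ** 2)
        f = psi @ w / (psi.sum() + 1e-10) * x * (g - y0)
        z += dt * (alpha_z * (beta_z * (g - y) - z) + f) / tau
        y += dt * z / tau
        x += dt * (-alpha_x * x) / tau
        ys.append(y)
    return np.array(ys)
```

The weights `w` are the compact, adaptable representation that frameworks like iLfD refine incrementally instead of re-demonstrating the whole motion.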