We present an approach that allows a robot to acquire new motor skills by learning the couplings across motor control variables. The demonstrated skill is first encoded in compact form through a modified version of Dynamic Movement Primitives (DMP) that encapsulates correlation information. Expectation-Maximization-based Reinforcement Learning is then used to modulate the mixture of dynamical systems initialized from the user's demonstration. The approach is evaluated on a torque-controlled 7-DOF Barrett WAM robotic arm in two skill-learning experiments: a reaching task in which the robot must adapt the learned movement to avoid an obstacle, and a dynamic pancake-flipping task.
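To make the encoding concrete, below is a minimal sketch of the discrete dynamic movement primitive that such an approach builds on, written in Python/NumPy. All names and gain values are illustrative assumptions; the paper's modified DMP additionally replaces the scalar gains with full matrices so that correlations across control variables are captured.

```python
import numpy as np

def dmp_rollout(x0, g, w, centers, widths, tau=1.0, dt=0.01,
                alpha_s=4.0, K=100.0, D=20.0, T=1.0):
    """Integrate a 1-D discrete DMP (Ijspeert-style transformation system).

    x0, g           : start and goal positions
    w               : weights of the learned forcing term
    centers, widths : Gaussian basis-function parameters on the phase s
    """
    x, v, s = x0, 0.0, 1.0
    traj = [x]
    for _ in range(int(T / dt)):
        psi = np.exp(-widths * (s - centers) ** 2)               # basis activations
        f = s * (g - x0) * np.dot(psi, w) / (psi.sum() + 1e-10)  # forcing term
        dv = (K * (g - x) - D * v + f) / tau                     # spring-damper + forcing
        dx = v / tau
        ds = -alpha_s * s / tau                                  # canonical system
        v, x, s = v + dv * dt, x + dx * dt, s + ds * dt
        traj.append(x)
    return np.array(traj)

# Example: with a zero forcing term, the system reduces to a critically
# damped reach toward the goal.
traj = dmp_rollout(x0=0.0, g=1.0, w=np.zeros(10),
                   centers=np.linspace(1, 0.01, 10),
                   widths=np.full(10, 25.0))
```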
A method for learning and reproducing robot force interactions in a Human-Robot Interaction setting is proposed. The method allows a robotic manipulator to learn to perform tasks that require exerting forces on external objects by interacting with a human operator in an unstructured environment. This is achieved by learning two aspects of a task: the positional profile and the force profile. The positional profile is obtained from task demonstrations via kinesthetic teaching. The force profile is obtained from additional demonstrations via a haptic device, through which a human teacher inputs the desired forces that the robot should exert on external objects during task execution. The two profiles are encoded as a mixture of dynamical systems, which is used to reproduce the task while satisfying both profiles. An active control strategy based on task-space control with variable stiffness is then proposed to reproduce the skill. The method is demonstrated in two experiments in which the robot learns an ironing task and a door-opening task.
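As a rough illustration of what task-space control with variable stiffness can look like, here is a hedged sketch in Python/NumPy. The controller structure, the gain values, and the superposition of the learned force profile as a feedforward term are assumptions made for illustration, not the paper's exact control law.

```python
import numpy as np

def task_space_command(x, x_d, xdot, f_d, K_p, D):
    """One task-space control step: a variable-stiffness spring-damper
    tracks the positional profile while the learned force profile f_d
    is superposed as a feedforward wrench."""
    return K_p @ (x_d - x) - D @ xdot + f_d

# Hypothetical gains: stiffness lowered along z so the desired contact
# force dominates in that direction (e.g., pressing an iron on a board).
K_p = np.diag([500.0, 500.0, 50.0])
D = np.diag([40.0, 40.0, 10.0])
u = task_space_command(x=np.zeros(3), x_d=np.array([0.1, 0.0, 0.0]),
                       xdot=np.zeros(3), f_d=np.array([0.0, 0.0, -5.0]),
                       K_p=K_p, D=D)
print(u)  # task-space force command, mapped to joint torques via J.T
```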
In robotics, the ultimate goal of reinforcement learning is to endow robots with the ability to learn, improve, adapt, and reproduce tasks with dynamically changing constraints, based on exploration and autonomous learning. We summarize the state of the art of reinforcement learning in the context of robotics, in terms of both algorithms and policy representations. Numerous challenges faced by the policy representation in robotics are identified. Three recent examples of the application of reinforcement learning to real-world robots are described: a pancake-flipping task, a bipedal-walking energy-minimization task, and an archery-based aiming task. In all examples, a state-of-the-art expectation-maximization-based reinforcement learning algorithm is used, and different policy representations are proposed and evaluated for each task. The proposed policy representations offer viable solutions to six rarely addressed challenges in policy representation: correlations, adaptability, multi-resolution, globality, multi-dimensionality, and convergence. Both the successes and the practical difficulties encountered in these examples are discussed. Based on insights from these cases, conclusions are drawn about the state of the art and future directions for reinforcement learning in robotics.
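The expectation-maximization-based algorithm referenced here is in the spirit of PoWER (Kober and Peters), which updates policy parameters by reward-weighted averaging of sampled perturbations. The sketch below is a simplified, hypothetical variant run on a toy reward, not the exact algorithm used in these experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def power_update(theta, sigma, reward_fn, n_rollouts=20, n_best=5):
    """One PoWER-style EM update: sample Gaussian perturbations of the
    policy parameters, keep the best rollouts, and add the
    reward-weighted mean of their perturbations to theta."""
    eps = sigma * rng.standard_normal((n_rollouts, theta.size))
    returns = np.array([reward_fn(theta + e) for e in eps])
    best = np.argsort(returns)[-n_best:]               # elite rollouts
    w = returns[best] / (returns[best].sum() + 1e-10)  # importance weights
    return theta + w @ eps[best]

# Toy example: a reward peaked at theta = [1, 1, 1].
reward = lambda th: np.exp(-np.sum((th - 1.0) ** 2))
theta = np.zeros(3)
for _ in range(50):
    theta = power_update(theta, sigma=0.3, reward_fn=reward)
print(theta)  # converges toward [1, 1, 1]
```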
We propose a new algorithm capable of online regeneration of walking gait patterns. The algorithm uses a nonlinear optimization technique to find step parameters that will bring the robot from its present state to a desired state. It modifies online not only the footstep positions but also the step timing in order to maintain dynamic stability during walking. Including step-time modification extends robustness against rarely addressed disturbances, such as pushes toward the stance foot. The controller is able to recover dynamic stability regardless of the source of the disturbance (e.g., model inaccuracy, reference-tracking error, or external disturbance). We describe the robot state estimation and center-of-mass feedback controller necessary to realize stable locomotion on our humanoid platform COMAN. We also present a set of experiments performed on the platform that show the performance of the feedback controller and of the gait-pattern regenerator, demonstrating how the robot copes with a series of pushes by adjusting step times and positions.
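A minimal forward model that such a step-parameter optimizer could use is the linear inverted pendulum with its capture point; the sketch below shows how the predicted foothold depends on both the center-of-mass state and the step time. The formulation (and the choice of the capture point rather than the paper's own stability criterion) is a simplifying assumption made for illustration.

```python
import numpy as np

def capture_point(com, com_vel, z_c=0.8, g=9.81):
    """Instantaneous capture point of the linear inverted pendulum (LIPM):
    the ground point the robot must step to in order to come to rest."""
    omega = np.sqrt(g / z_c)
    return com + com_vel / omega

def predicted_capture_point(com, com_vel, t_step, z_c=0.8, g=9.81):
    """Propagate the LIPM state (stance foot at the origin) over the
    remaining step time t_step, then evaluate the capture point there.
    An optimizer can search over both the landing position and t_step;
    shortening t_step after a strong push is the timing adaptation
    discussed in the abstract."""
    omega = np.sqrt(g / z_c)
    c, s = np.cosh(omega * t_step), np.sinh(omega * t_step)
    com_T = com * c + com_vel * s / omega   # closed-form LIPM solution
    vel_T = com * omega * s + com_vel * c
    return capture_point(com_T, vel_T, z_c, g)

# Example: a forward push (positive velocity) moves the required foothold
# farther ahead, and more so the longer the step lasts.
print(predicted_capture_point(com=0.0, com_vel=0.3, t_step=0.4))
print(predicted_capture_point(com=0.0, com_vel=0.3, t_step=0.6))
```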
We present an integrated approach allowing a free-standing humanoid robot to acquire new motor skills through kinesthetic teaching. The proposed method simultaneously controls the upper and lower body of the robot with different control strategies: imitation learning is used to train the upper body via kinesthetic teaching, while the Reaction Null Space method keeps the robot balanced. During demonstration, a force/torque sensor records the exerted forces; during reproduction, a hybrid position/force controller applies the learned position and force trajectories at the end effector. The proposed method is tested on a 25-DOF Fujitsu HOAP-2 humanoid robot with a surface-cleaning task.
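For reference, a classic hybrid position/force controller splits the task-space axes with a selection matrix (Raibert and Craig). The sketch below is a generic, hypothetical instance of that idea, not the controller implemented on HOAP-2; all gains and names are assumptions.

```python
import numpy as np

def hybrid_pos_force(x, x_d, f_meas, f_d, S, K_x, K_f):
    """Hybrid position/force control: the diagonal selection matrix S
    picks the position-controlled axes; (I - S) picks the
    force-controlled ones."""
    I = np.eye(3)
    u_pos = S @ (K_x @ (x_d - x))               # position error on selected axes
    u_force = (I - S) @ (K_f @ (f_d - f_meas))  # force error on the rest
    return u_pos + u_force

# Example: control position in x and y, and force along z
# (e.g., pressing a cleaning tool against a surface).
S = np.diag([1.0, 1.0, 0.0])
u = hybrid_pos_force(x=np.zeros(3), x_d=np.array([0.2, 0.1, 0.0]),
                     f_meas=np.array([0.0, 0.0, -3.0]),
                     f_d=np.array([0.0, 0.0, -8.0]),
                     S=S, K_x=np.eye(3) * 200.0, K_f=np.eye(3) * 0.5)
print(u)
```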