Intelligenza Artificiale xx (20xx)

Abstract. In multi-task reinforcement learning (MTRL), the objective is to simultaneously learn multiple tasks and exploit their similarity to improve performance with respect to single-task learning. In this paper we investigate the case when all the tasks can be accurately represented in a linear approximation space using the same small subset of the original (large) set of features. This is equivalent to assuming that the weight vectors of the task value functions are jointly sparse, i.e., the set of their non-zero components is small and shared across tasks. Building on existing results in multi-task regression, we develop two multi-task extensions of the fitted Q-iteration algorithm. While the first algorithm assumes that the tasks are jointly sparse in the given representation, the second one learns a transformation of the features in an attempt to find a sparser representation. For both algorithms we provide a sample complexity analysis and numerical simulations.
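The joint-sparsity assumption described above can be illustrated with a small group-sparse regression sketch. This is not the paper's algorithm: it uses scikit-learn's `MultiTaskLasso` (an L2,1-penalized regressor) as a stand-in for the multi-task regression step, with synthetic data whose tasks genuinely share a small feature support; all variable names and dimensions are illustrative.

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(0)
n_samples, n_features, n_tasks = 100, 30, 4

# Design matrix and a jointly sparse weight matrix: only the first
# 5 of 30 features are active, and that support is shared by all tasks.
X = rng.standard_normal((n_samples, n_features))
W = np.zeros((n_features, n_tasks))
W[:5] = rng.standard_normal((5, n_tasks))
Y = X @ W + 0.01 * rng.standard_normal((n_samples, n_tasks))

# The L2,1 penalty zeroes a feature for all tasks at once,
# recovering the shared support.
model = MultiTaskLasso(alpha=0.1).fit(X, Y)

# coef_ has shape (n_tasks, n_features); a feature is selected
# if any task uses it.
support = np.any(np.abs(model.coef_) > 1e-6, axis=0)
```

On this data the recovered `support` coincides with the five truly active features, which is the behavior the paper's assumption of a shared small subset of features relies on.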
There is evidence that people expect to be able to play games with autonomous robots, suggesting that robogames could be one of the next killer applications for Robotics. Physically Interactive RoboGames (PIRG) is a new application field where autonomous robots are involved in games requiring physical interaction with people. Since research in this field is taking its first steps, definitions and design guidelines are still largely missing. In this paper, a definition for PIRG is proposed, together with guidelines for their design. Physically Interactive, Competitive RoboGames (PICoRG) are also introduced. They are a particular kind of PIRG where human players are involved in a challenging, highly interactive and competitive game activity with autonomous robots. The development process of a PICoRG, Jedi Trainer, is presented to show a practical application of the proposed guidelines. The game has been successfully played in different unstructured environments by the general public; feedback is reported and analysed.
In the context of humanoid skill learning, movement primitives have gained much attention because of their compact representation and convenient combination with a myriad of optimization approaches. Among them, a well-known scheme is to use Dynamic Movement Primitives (DMPs) with reinforcement learning (RL) algorithms. While various remarkable results have been reported, skill learning under physical constraints has not been sufficiently investigated. For example, when RL is employed to optimize the robot joint trajectories, the exploration noise could drive the resulting trajectory beyond the joint limits. In this paper, we focus on robot skill learning characterized by joint limit avoidance, introducing the novel Constrained Dynamic Movement Primitives (CDMPs). By controlling a set of transformed states (called exogenous states) instead of the original DMP states, CDMPs are capable of maintaining the joint trajectories within the safety limits. We validate CDMPs on the humanoid robot iCub, showing the applicability of our approach.
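The idea of constraining a trajectory by transforming its state can be sketched in a few lines. The following is a minimal, illustrative example, not the CDMP formulation from the paper: it rolls out a standard one-dimensional DMP transformation system (here with a zero forcing term, so it acts as a point attractor) and then maps the unconstrained trajectory through a sigmoid into a joint range `[lo, hi]`, so the resulting joint trajectory can never leave the limits regardless of exploration noise. All gains and bounds are made-up values.

```python
import numpy as np

def dmp_rollout(y0, g, T=1.0, dt=0.001, alpha=25.0):
    """Euler rollout of a 1-D DMP transformation system toward goal g."""
    beta = alpha / 4.0   # critical damping
    ax = 1.0             # canonical-system decay rate
    y, v, x = y0, 0.0, 1.0
    ys = []
    for _ in range(int(T / dt)):
        f = 0.0  # zero forcing term: pure spring-damper attractor
        v += dt * (alpha * (beta * (g - y) - v) + x * f)
        y += dt * v
        x += dt * (-ax * x)
        ys.append(y)
    return np.array(ys)

# Unconstrained rollout converging to its goal.
traj = dmp_rollout(0.0, 1.0)

# Constrained variant: treat the DMP state as an unbounded
# "exogenous" signal z and squash it into the joint limits.
lo, hi = -0.5, 0.8
z = dmp_rollout(0.0, 3.0)               # exogenous trajectory
q = lo + (hi - lo) / (1.0 + np.exp(-z))  # joint trajectory, always in (lo, hi)
```

The squashing map guarantees the bound by construction; the actual CDMP scheme defines the dynamics directly on the transformed (exogenous) states so that learning and exploration also happen in the unconstrained space.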