Object transportation can be a challenging problem for a single robot when the object is oversized and/or overweight. A multi-robot system can take advantage of increased driving power and a more flexible configuration to solve such a problem. However, an increased number of individuals also changes the dynamics of the system, which makes control of a multi-robot system more complicated. Even worse, if the whole system relies on a centralized decision-making unit, the data flow can easily become overloaded as the system scales up. In this research, we propose a decentralized control scheme for a multi-robot system in which each individual is equipped with a deep Q-network (DQN) controller to perform an oversized-object transportation task. DQN is a deep reinforcement learning algorithm and thus does not require knowledge of the system dynamics; instead, it enables the robots to learn appropriate control strategies through trial-and-error interactions with the task environment. Since analogous controllers are distributed across the individuals, the computational bottleneck is avoided systematically. We demonstrate such a system in a scenario in which a two-robot team carries an oversized rod through a doorway. The presented multi-robot system learns abstract features of the task, and cooperative behaviors are observed. The decentralized DQN-style controller shows strong robustness against uncertainties. In addition, we propose a universal metric to assess the cooperation quantitatively.
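As a minimal sketch of the trial-and-error learning the abstract describes, the following shows a tabular Q-learning update, i.e. the value function that a DQN would instead approximate with a neural network. The action names, reward signal, and hyperparameters below are hypothetical placeholders, not the paper's actual setup; each robot in a decentralized scheme would maintain its own copy of such a learner.

```python
import random

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1        # learning rate, discount, exploration
ACTIONS = ["forward", "back", "left", "right"]  # placeholder action set

Q = {}  # per-robot value table: (state, action) -> estimated return

def choose_action(state):
    # epsilon-greedy: occasionally explore, otherwise exploit current estimates
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

def update(state, action, reward, next_state):
    # Bellman target: r + gamma * max_a' Q(s', a')
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    td_target = reward + GAMMA * best_next
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + ALPHA * (td_target - old)
```

A DQN replaces the table `Q` with a network trained on the same temporal-difference target, which is what lets it generalize over continuous robot and payload poses.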
When two skilled human workers cooperate on a task, such as moving a sofa through a tight doorway, they often infer what needs to be done without explicit communication because they have learned cooperation skills from prior work or training. This paper extends that concept to a two-robot team. The robots are tasked with carrying a large payload through a narrow doorway while avoiding obstacles within the room. System dynamics and sensor noise were included in the study. Each robot is independently controlled with knowledge of the goal location, its own position, and the pose of the payload. The decentralized control uses a Genetic Fuzzy System for each robot to learn its own decision-making skill through a training process, without a pre-planned motion trajectory. The genetic algorithm makes determining the shape of the fuzzy logic membership functions more efficient by using an evolutionary search to tune all parameters in the fuzzy system simultaneously. The contribution of this paper is to illustrate how genetic training can tune a simple, decentralized Fuzzy Logic System for a given scenario and then be used, unaltered, in scenarios beyond those for which it was trained. The extended scenarios introduce unknown obstacles, new sizes and mass properties for the robots and payload, and random initial positions. The effectiveness of this approach in a 2D case is evaluated by dynamic simulation, with success rates starting at 95% for the baseline scenario and 84% for the scenario extended furthest from the original training conditions.
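The evolutionary tuning step the abstract describes can be sketched as a toy genetic algorithm over a vector of membership-function parameters. The fitness function here is a stand-in (in the paper it would come from task performance in dynamic simulation), and the parameter count, population size, and operators are illustrative assumptions, not the authors' configuration.

```python
import random

random.seed(0)
N_PARAMS = 6      # e.g. centers/widths of triangular membership functions
POP, GENS = 20, 30

def fitness(params):
    # placeholder objective: reward parameters near an arbitrary target shape;
    # a real Genetic Fuzzy System would score doorway-traversal success instead
    target = [0.1, 0.3, 0.5, 0.5, 0.7, 0.9]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def evolve():
    pop = [[random.random() for _ in range(N_PARAMS)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: POP // 2]                  # truncation selection
        children = []
        while len(elite) + len(children) < POP:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, N_PARAMS)  # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(N_PARAMS)       # point mutation, clipped to [0, 1]
            child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.05)))
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
```

The key property the abstract relies on is that all membership-function parameters are searched simultaneously, rather than hand-tuned one at a time.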