Multi-agent path finding (MAPF) is an indispensable component of large-scale robot deployments in numerous domains ranging from airport management to warehouse automation. In particular, this work addresses lifelong MAPF (LMAPF), an online variant of the problem in which agents are immediately assigned a new goal upon reaching their current one, in dense and highly structured environments typical of real-world warehouse operations. Effectively solving LMAPF in such environments requires expensive coordination between agents as well as frequent replanning, a daunting task for existing coupled and decoupled approaches alike. To achieve considerable agent coordination without compromising reactivity and scalability, we introduce PRIMAL2, a distributed reinforcement learning framework for LMAPF in which agents learn fully decentralized policies to reactively plan paths online in a partially observable world. We extend our previous work, which was effective in low-density, sparsely occupied worlds, to highly structured and constrained worlds by identifying behaviors and conventions that improve implicit agent coordination, and by enabling their learning through the construction of a novel local agent observation and various training aids. We present extensive results for PRIMAL2 in both MAPF and LMAPF environments with up to 1024 agents and compare its performance to complete state-of-the-art planners. We experimentally observe that agents successfully learn to follow ideal conventions and can exhibit selfless, coordinated maneuvers that maximize joint rewards. We find that PRIMAL2 not only significantly surpasses our previous work, but also performs on par with, and even outperforms, state-of-the-art planners in terms of throughput.
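To make the lifelong goal-reassignment mechanism concrete, here is a minimal sketch of one LMAPF environment step in which any agent that reaches its goal is immediately given a new one, with each such event counting toward throughput. The grid encoding, action set, and random goal sampling are illustrative assumptions and not part of the PRIMAL2 implementation.

```python
# Minimal LMAPF step sketch (illustrative only, not the PRIMAL2 environment).
import random

def sample_free_cell(grid, occupied):
    """Pick a random traversable cell (0 = free) that is not already a goal."""
    free = [(r, c) for r in range(len(grid)) for c in range(len(grid[0]))
            if grid[r][c] == 0 and (r, c) not in occupied]
    return random.choice(free)

def lmapf_step(grid, positions, goals, actions):
    """Advance all agents one step; agents reaching their goal get a new one."""
    moves = {0: (0, 0), 1: (-1, 0), 2: (1, 0), 3: (0, -1), 4: (0, 1)}
    throughput = 0
    for i, a in enumerate(actions):
        dr, dc = moves[a]
        r, c = positions[i][0] + dr, positions[i][1] + dc
        # Keep the move only if it stays on the map, avoids obstacles,
        # and does not collide with another agent's current cell.
        if (0 <= r < len(grid) and 0 <= c < len(grid[0])
                and grid[r][c] == 0 and (r, c) not in positions):
            positions[i] = (r, c)
        if positions[i] == goals[i]:      # goal reached: reassign immediately
            throughput += 1
            goals[i] = sample_free_cell(grid, set(goals))
    return positions, goals, throughput
```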
Multi-agent foraging (MAF) involves distributing a team of agents to search an environment and extract resources from it. Many foraging algorithms use biologically inspired signaling mechanisms, such as pheromones, to help agents navigate from resources back to a central nest while relying only on local sensing. However, these approaches often rely on predictable pheromone dynamics and/or perfect robot localization. In nature, certain environmental factors (e.g., heat or rain) can disturb or destroy pheromone trails, while imperfect sensing can lead robots astray. In this work, we propose ForMIC, a distributed reinforcement learning MAF approach that relies on pheromones as a way to endow agents with implicit communication abilities via their shared environment. Specifically, full agents (i.e., agents currently carrying resources) involuntarily lay trails of pheromones as they move; other agents can then measure the local levels of pheromones to guide their individual decisions. We show how these stigmergic interactions among agents can lead to a highly scalable, decentralized MAF policy that is naturally resilient to common environmental disturbances, such as depleting resources and sudden pheromone disappearance. We present simulation results that compare our learned policy against existing state-of-the-art MAF algorithms in a set of experiments varying team sizes, the number and placement of resources, and key environmental disturbances. Our results demonstrate that our learned policy outperforms these baselines, approaching the performance of a planner with full observability and centralized agent allocation.
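As a minimal illustration of the stigmergic mechanism described above, the sketch below maintains a shared pheromone field that evaporates each step, lets loaded agents deposit pheromone on the cells they traverse, and exposes only a small local patch of the field to each agent. The constants and function names (DEPOSIT, EVAPORATION, the sensing radius) are assumptions made for illustration, not values or interfaces from the ForMIC paper.

```python
# Illustrative stigmergy sketch (not the ForMIC implementation).
import numpy as np

DEPOSIT = 1.0        # pheromone added by a loaded ("full") agent per step
EVAPORATION = 0.05   # fraction of pheromone lost per step

def update_pheromones(pheromone, agent_positions, agent_loaded):
    """Evaporate the field, then let loaded agents deposit on their current cell."""
    pheromone *= (1.0 - EVAPORATION)
    for (r, c), loaded in zip(agent_positions, agent_loaded):
        if loaded:
            pheromone[r, c] += DEPOSIT
    return pheromone

def sense_local(pheromone, position, radius=2):
    """Return the (2*radius+1)^2 pheromone patch around an agent (zero-padded)."""
    padded = np.pad(pheromone, radius)
    r, c = position[0] + radius, position[1] + radius
    return padded[r - radius:r + radius + 1, c - radius:c + radius + 1]

# Example: 10x10 world with two agents; only the loaded agent leaves a trail.
field = np.zeros((10, 10))
field = update_pheromones(field, [(2, 3), (7, 7)], [True, False])
print(sense_local(field, (2, 3)))
```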