Exploration is a crucial problem in safety-of-life applications such as search-and-rescue missions. Gaussian processes constitute an attractive underlying data model because they exploit the spatial correlations of the process to be explored, reducing the amount of sampling required. Furthermore, multi-agent approaches offer well-known advantages for exploration. Previous decentralized multi-agent exploration algorithms that use Gaussian processes as the underlying data model have only been validated in simulation; implementing such an algorithm on real robots raises difficulties that have not yet been tackled. In this work, we propose an exploration algorithm that addresses the following challenges: (i) which information to transmit to achieve multi-agent coordination; (ii) how to implement lightweight collision avoidance; (iii) how to learn the data model without prior information. We validate our algorithm in two experiments with real robots. First, a ground-based robot explores the magnetic field intensity. Second, two quadcopters equipped with an ultrasound sensor explore a terrain profile. We show that our algorithm outperforms both a meander and a random trajectory, and that we are able to learn the data model online while exploring.
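The spatial-correlation argument in the abstract above can be illustrated with a minimal Gaussian process sketch: the GP posterior variance is low near already-sampled locations and high elsewhere, so an explorer can skip redundant measurements and target the least-correlated candidate. This is only an illustrative sketch (zero-mean GP, squared-exponential kernel, made-up locations), not the paper's actual algorithm.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, variance=1.0):
    """Squared-exponential kernel between two sets of 2-D locations."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def gp_posterior_variance(X_train, X_query, length_scale=1.0, noise=1e-3):
    """Posterior variance of a zero-mean GP at query locations.

    Low variance near sampled locations encodes the spatial correlation
    that lets an explorer avoid redundant measurements.
    """
    K = rbf_kernel(X_train, X_train, length_scale) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_query, X_train, length_scale)
    Kss = rbf_kernel(X_query, X_query, length_scale)
    v = np.linalg.solve(K, Ks.T)
    return np.diag(Kss - Ks @ v)

# A robot that has sampled at (0,0) and (1,1) would next target the
# highest-variance (least correlated) candidate location.
X_train = np.array([[0.0, 0.0], [1.0, 1.0]])
candidates = np.array([[0.1, 0.1], [3.0, 3.0]])
var = gp_posterior_variance(X_train, candidates)
best = candidates[np.argmax(var)]  # the far-away, unexplored point
```

In a multi-agent setting, transmitting the sampled locations and measurements (or a compressed summary of them) is what lets each agent update this shared variance map and coordinate.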
State-of-the-art multi-robot information gathering (MR-IG) algorithms often rely on a model that describes the structure of the information of interest to drive the robots' motion. This causes MR-IG algorithms to fail when applied to new IG tasks for which existing models cannot describe the information of interest. Therefore, in this paper we propose an MR-IG algorithm that can be applied to new IG tasks with few algorithmic changes. To this end, we introduce DeepIG: an MR-IG algorithm that uses deep reinforcement learning to allow robots to learn how to gather information. Nevertheless, there are IG tasks for which accurate models have been derived; we therefore extend DeepIG to exploit existing models for such tasks, and term the resulting algorithm model-based DeepIG (MB-DeepIG). First, we evaluate DeepIG in simulations and in an indoor experiment with three quadcopters that autonomously map an unknown terrain profile built in our lab. The results demonstrate that DeepIG can be applied to different IG tasks without algorithmic changes and that it is robust to measurement noise. Then, we benchmark MB-DeepIG against state-of-the-art information-driven Gaussian-process-based IG algorithms. The results demonstrate that MB-DeepIG outperforms the considered benchmarks.
Information gathering (IG) algorithms aim to intelligently select a mobile sensor's actions so as to efficiently obtain an accurate reconstruction of a physical process, such as an occupancy map or a magnetic field. Many recent works have proposed IG algorithms that employ Gaussian processes (GPs) as the underlying model of the process. However, most of these algorithms discretize the state space, which makes them computationally intractable for robotic systems with complex dynamics. Moreover, they are not suited for online information gathering tasks, as they assume prior knowledge of the GP parameters. This paper presents a novel approach that tackles these two issues. Specifically, our approach consists of two intertwined steps: (i) a Rapidly-exploring Random Tree (RRT) search that allows a robot to identify unvisited locations and to learn the GP parameters, and (ii) an RRT*-based informative path planner that guides the robot towards those locations by maximizing the information gathered while minimizing path cost. The combination of the two steps allows an online realization of the algorithm while eliminating the need for discretization. We demonstrate that our proposed algorithm outperforms the state of the art both in simulations and in a lab experiment in which a ground-based robot explores the magnetic field intensity within an indoor environment populated with obstacles.
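The "maximizing information while minimizing path cost" trade-off above can be sketched as a scalar utility over candidate paths. The path names, information values, costs, and weight below are all illustrative assumptions, not values from the paper; in the actual algorithm the information term would come from the GP (e.g. summed posterior variance along the path) and the candidates from an RRT*-style planner.

```python
# Hypothetical candidate paths from an RRT*-style planner: each path has
# an information value (e.g. summed GP posterior variance along the path)
# and a traversal cost (path length). All numbers are illustrative.
paths = {
    "toward_unvisited": {"information": 4.2, "cost": 3.0},
    "short_detour":     {"information": 1.1, "cost": 0.8},
    "long_redundant":   {"information": 1.5, "cost": 5.0},
}

def utility(p, cost_weight=0.5):
    """Information gathered minus weighted path cost."""
    return p["information"] - cost_weight * p["cost"]

# The planner commits to the highest-utility candidate.
best = max(paths, key=lambda k: utility(paths[k]))  # "toward_unvisited"
```

The weight on the cost term tunes how aggressively the robot detours for information; setting it too high collapses the planner into a shortest-path follower.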
Gas source localization tackles the problem of finding leakages of hazardous substances, such as poisonous gases or radiation, in the event of a disaster. To avoid threats to human operators, it is preferable to dispatch autonomous robots to localize potential gas sources. This work investigates a reinforcement learning framework that allows a robotic agent to learn how to localize gas sources. We propose a solution that assists reinforcement learning with existing domain knowledge based on a model of the gas dispersion process. In particular, we incorporate a priori domain knowledge by designing appropriate rewards and observation inputs for the reinforcement learning algorithm. We show that a robot trained with our proposed method outperforms state-of-the-art gas source localization strategies, as well as robots trained without additional domain knowledge. Furthermore, the framework developed in this work generalizes to a large variety of information gathering tasks.