We present PLUMES, a planner for localizing and collecting samples at the global maximum of an a priori unknown and partially observable continuous environment. This "maximum seek-and-sample" (MSS) problem is pervasive in the environmental and earth sciences. Experts want to collect scientifically valuable samples at an environmental maximum (e.g., an oil-spill source), but lack prior knowledge of the phenomenon's distribution. We formulate the MSS problem as a partially observable Markov decision process (POMDP) with continuous state and observation spaces and a sparse reward signal. To solve the MSS POMDP, PLUMES uses an information-theoretic reward heuristic with continuous-observation Monte Carlo tree search to efficiently localize and sample from the global maximum. In simulation and field experiments, PLUMES collects more scientifically valuable samples than state-of-the-art planners in a diverse set of environments, with various platforms, sensors, and challenging real-world conditions.
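The explore/exploit mechanic at the heart of PLUMES's tree search can be sketched with a much simpler one-step UCB1 planner over a discretized action set; this is an illustrative reduction, not the paper's continuous-observation MCTS, and the toy objective `field` is hypothetical:

```python
import math
import random

def plan_step(actions, simulate, n_iters=2000, c=1.0):
    """One-step UCB1 planner: repeatedly samples noisy returns for each
    action, then commits to the action with the highest mean reward."""
    stats = {a: [0.0, 0] for a in actions}   # action -> [reward sum, visits]
    for t in range(1, n_iters + 1):
        def ucb(a):
            s, n = stats[a]
            if n == 0:                        # unvisited actions first
                return float("inf")
            return s / n + c * math.sqrt(math.log(t) / n)
        a = max(actions, key=ucb)
        r = simulate(a)                       # noisy rollout reward
        stats[a][0] += r
        stats[a][1] += 1
    return max(actions, key=lambda a: stats[a][0] / max(stats[a][1], 1))

# Toy MSS-style objective: unknown 1-D field with its maximum at x = 3,
# observed through zero-mean Gaussian sensor noise.
random.seed(0)
field = lambda x: -(x - 3) ** 2
best = plan_step(range(7), lambda x: field(x) + random.gauss(0, 0.5))
```

With enough iterations the planner concentrates its samples near the maximum, mirroring the seek-then-sample behavior the abstract describes.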
This paper proposes a bandwidth-tunable technique for real-time probabilistic scene modeling and mapping to enable co-robotic exploration in communication-constrained environments such as the deep sea. The parameters of the system let the user characterize the scene complexity represented by the map, which in turn determines the bandwidth requirements. The approach is demonstrated using an underwater robot that learns an unsupervised scene model of the environment and then uses this model to communicate the spatial distribution of various high-level semantic scene constructs to a human operator. Preliminary experiments in an artificially constructed tank environment, as well as simulated missions over a 10 m × 10 m coral reef using real data, show the tunability of the maps to different bandwidth constraints and science interests. To our knowledge, this is the first paper to quantify how the free parameters of the unsupervised scene model impact both the scientific utility of, and the bandwidth required to communicate, the resulting scene model.

I. INTRODUCTION

The challenges of exploration in remote and extreme environments such as the deep sea [1], [2], cave systems [3], outer space [4], and areas during or after a natural disaster [5], [6] have much in common. It is expensive and inherently dangerous for humans to explore such locations directly; hence, the use of mobile robots is desirable. However, if communication bottlenecks in the environment prohibit live streaming of video or other sensor data, then direct control of the robots is generally not possible.
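The complexity-versus-bandwidth trade-off can be illustrated with the concentration parameter of a Chinese Restaurant Process, the nonparametric clustering prior behind the scene model described in this work; the sketch below is the plain CRP, not the paper's spatially correlated variant:

```python
import random

def crp_partition(n, alpha, seed=0):
    """Sample a partition of n observations from a Chinese Restaurant
    Process. Larger alpha -> more clusters ("tables"), i.e. a richer,
    more bandwidth-hungry scene model; smaller alpha -> a coarser one."""
    rng = random.Random(seed)
    tables = []                          # tables[k] = items seated at table k
    for i in range(n):
        # Join table k with prob. tables[k]/(i+alpha); open a new
        # table with prob. alpha/(i+alpha).
        r = rng.random() * (i + alpha)
        acc = 0.0
        for k, cnt in enumerate(tables):
            acc += cnt
            if r < acc:
                tables[k] += 1
                break
        else:
            tables.append(1)
    return tables
```

Sweeping `alpha` is one concrete way a single free parameter can dial the expected number of semantic scene constructs, and hence the map's bandwidth cost.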
This paper describes a novel approach to co-robotic exploration in communication-starved environments and presents a system implementation of an undersea robot for co-robotic exploration of marine environments. Although physically controlling a robot can be achieved over relatively low bandwidth, it is difficult to transmit the scene information necessary for an operator or scientist to make high-level navigational decisions. We propose a spatially correlated Chinese Restaurant Process (CRP)-based [7] scene-understanding model that can be tuned to operate with
The gap between our ability to collect interesting data and our ability to analyze these data is growing at an unprecedented rate. Recent algorithmic attempts to fill this gap have employed unsupervised tools to discover structure in data. Some of the most successful approaches have used probabilistic models to uncover latent thematic structure in discrete data. Despite the success of these models on textual data, they have not generalized as well to image data, in part because of the spatial and temporal structure that may exist in an image stream. We introduce a novel unsupervised machine learning framework that incorporates the ability of convolutional autoencoders to discover features from images that directly encode spatial information, within a Bayesian nonparametric topic model that discovers meaningful latent patterns within discrete data. By using this hybrid framework, we overcome the fundamental dependency of traditional topic models on rigidly hand-coded data representations, while simultaneously encoding spatial dependency in our topics without adding model complexity. We apply this model to the motivating application of high-level scene understanding and mission summarization for exploratory marine robots. Our experiments on a seafloor dataset collected by a marine robot show that the proposed hybrid framework outperforms current state-of-the-art approaches on the task of unsupervised seafloor terrain characterization.
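The topic-model half of such a hybrid can be sketched as a collapsed Gibbs sampler for vanilla LDA over discrete tokens standing in for quantized autoencoder features; this is a simplification (the paper uses a Bayesian nonparametric model), and the `sand`/`coral` vocabularies are hypothetical:

```python
import random
from collections import defaultdict

def lda_gibbs(docs, K, n_iter=200, alpha=0.1, beta=0.1, seed=0):
    """Collapsed Gibbs sampling for LDA. docs: lists of discrete tokens
    (e.g. quantized visual features). Returns per-document topic counts."""
    rng = random.Random(seed)
    V = len({w for d in docs for w in d})
    ndk = [[0] * K for _ in docs]                 # doc-topic counts
    nkw = [defaultdict(int) for _ in range(K)]    # topic-word counts
    nk = [0] * K                                  # topic totals
    z = []
    for di, d in enumerate(docs):                 # random initialization
        zd = []
        for w in d:
            k = rng.randrange(K)
            zd.append(k)
            ndk[di][k] += 1; nkw[k][w] += 1; nk[k] += 1
        z.append(zd)
    for _ in range(n_iter):
        for di, d in enumerate(docs):
            for wi, w in enumerate(d):
                k = z[di][wi]                     # remove current assignment
                ndk[di][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                # full conditional p(z = k | everything else)
                ps = [(ndk[di][j] + alpha) * (nkw[j][w] + beta) / (nk[j] + V * beta)
                      for j in range(K)]
                r = rng.random() * sum(ps)        # sample a new topic
                k = 0
                while r > ps[k] and k < K - 1:
                    r -= ps[k]; k += 1
                z[di][wi] = k
                ndk[di][k] += 1; nkw[k][w] += 1; nk[k] += 1
    return ndk
```

On two "images" with disjoint vocabularies, the sampler concentrates each document's tokens in a single topic, the basic behavior the hybrid framework builds on for terrain characterization.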
Unsupervised learning techniques, such as Bayesian topic models, are capable of discovering latent structure directly from raw data. These unsupervised models can endow robots with the ability to learn from their observations without human supervision and then use the learned models for tasks such as autonomous exploration, adaptive sampling, or surveillance. This paper extends single-robot topic models to the domain of multiple robots. The main difficulty of this extension lies in achieving and maintaining global consensus among the unsupervised models learned locally by each robot. This is especially challenging for multi-robot teams operating in communication-constrained environments, such as marine robots. We present a novel approach for multi-robot distributed learning in which each robot maintains a local topic model to categorize its observations and model parameters are shared to achieve global consensus. We apply a combinatorial optimization procedure that combines local robot topic distributions into a globally consistent model based on topic similarity, which we find mitigates topic drift when compared to a baseline approach that matches topics naïvely. We evaluate our methods experimentally by demonstrating multi-robot underwater terrain characterization using simulated missions on real seabed imagery. Our proposed method achieves model quality under bandwidth constraints similar to that achieved by models that communicate continuously, despite requiring less than one percent of the data transmission needed for continuous communication.
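The combinatorial matching step can be illustrated with the Hungarian algorithm applied to a pairwise topic-distance matrix; the Jensen-Shannon divergence used below is an illustrative choice, not necessarily the paper's exact similarity measure:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_topics(phi_a, phi_b):
    """Align robot B's topic-word distributions to robot A's by
    minimizing total Jensen-Shannon divergence via the Hungarian
    algorithm. phi_a, phi_b: (K, V) arrays with rows summing to 1.
    Returns perm with perm[i] = index of B's topic matched to A's i."""
    def js(p, q):
        m = 0.5 * (p + q)
        kl = lambda a, b: np.sum(a * np.log((a + 1e-12) / (b + 1e-12)))
        return 0.5 * kl(p, m) + 0.5 * kl(q, m)
    K = phi_a.shape[0]
    cost = np.array([[js(phi_a[i], phi_b[j]) for j in range(K)]
                     for i in range(K)])
    _, perm = linear_sum_assignment(cost)   # optimal one-to-one matching
    return perm
```

A globally optimal one-to-one matching like this, as opposed to naïvely pairing topics by index, is what keeps topic labels consistent across robots as local models drift.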
Squid are mobile, diverse, ecologically important marine organisms whose behavior and habitat use can have substantial impacts on ecosystems and fisheries. However, in part as a consequence of the inherent challenges of monitoring squid in their natural marine environment, fine-scale behavioral observations of these free-swimming, soft-bodied animals are rare. Bio-logging tags provide an emerging way to remotely study squid behavior in their natural environments. Here, we applied a novel, high-resolution bio-logging tag (ITAG) to seven veined squid, Loligo forbesii, in a controlled experimental environment to quantify their short-term (24 h) behavioral patterns. Tag accelerometer, magnetometer, and pressure data were used to develop automated gait-classification algorithms based on overall dynamic body acceleration, and a subset of the events were assessed and confirmed using concurrently collected video data. Finning, flapping, and jetting gaits were observed, with the low-acceleration finning gaits detected most often. The animals routinely used a finning gait to ascend (climb) and then glide during descent with fins extended in the tank's water column, a possible strategy to improve swimming efficiency for these negatively buoyant animals. Arms-first and mantle-first directional swimming were observed in approximately equal proportions, and the squid were slightly but significantly more active at night. These tag-based observations are novel for squid and indicate a more efficient mode of movement than suggested by some previous observations. The combination of sensing, classification, and estimation developed and applied here will enable the quantification of squid activity patterns in the wild, providing new biological information such as in situ identification of behavioral states, temporal patterns, habitat requirements, energy expenditure, and interactions of squid through space and time.
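A common construction of overall dynamic body acceleration (ODBA), the quantity the gait classifiers are built on, subtracts a moving-average "static" (gravity and posture) component from each accelerometer axis and sums the absolute residuals; the window length here is an illustrative parameter, not the paper's:

```python
import numpy as np

def odba(acc, fs, window_s=2.0):
    """Overall dynamic body acceleration.
    acc: (N, 3) accelerometer samples in g; fs: sampling rate in Hz.
    The per-axis moving average estimates the static component; the
    summed absolute residuals capture dynamic movement (e.g. a jet)."""
    win = max(1, int(window_s * fs))
    kernel = np.ones(win) / win
    static = np.vstack([np.convolve(acc[:, i], kernel, mode="same")
                        for i in range(3)]).T
    return np.abs(acc - static).sum(axis=1)
```

Thresholding this signal separates low-acceleration finning from high-acceleration flapping and jetting events, which can then be spot-checked against video as described above.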