We present an approach for navigating in unknown environments while simultaneously gathering information for inspecting underwater structures using an autonomous underwater vehicle (AUV). To accomplish this, we first use our pipeline for mapping and planning collision-free paths online, which endows an AUV with the capability to autonomously acquire optical data in close proximity. With that information, we then propose a reconstruction pipeline to create a photo-realistic textured 3D model of the inspected area. These 3D models are also of particular interest to other fields of study in marine sciences, since they can serve as base maps for environmental monitoring, allowing change detection of biological communities and their environment over time. Finally, we evaluate our approach using the Sparus II, a torpedo-shaped AUV, conducting inspection missions in a challenging, natural, real-world scenario.
In this letter, we propose a method to automate the exploration of unknown underwater structures for autonomous underwater vehicles (AUVs). The proposed algorithm iteratively incorporates exteroceptive sensor data and replans the next-best-view in order to fully map an underwater structure. This approach does not require prior environment information. However, a safe exploration depth and the exploration area (defined by a bounding box, parameterized by its size, location, and resolution) must be provided by the user. The algorithm operates online by iteratively conducting the following three tasks: (1) Profiling sonar data are first incorporated into a 2-D grid map, where voxels are labeled according to their state (a voxel can be labeled as empty, unseen, occluded, occplane, occupied, or viewed). (2) Useful viewpoints to continue exploration are generated according to the map. (3) A safe path is generated to guide the robot toward the next viewpoint location. Two sensors are used in this approach: a scanning profiling sonar, which is used to build an occupancy map of the surroundings, and an optical camera, which acquires optical data of the scene. Finally, in order to demonstrate the feasibility of our approach, we provide real-world results using the Sparus II AUV.

This work was supported by the EXCELLABUST and ARCHROV Projects under Grants H2020-TWINN-2015, CSA, ID: 691980, and DPI2014-57746-C3-3-R. The work of E. Vidal was supported by the Spanish Government through Ph.D. grant FPU14/0549
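The three-task loop described above can be sketched in simplified form. This is an illustrative minimal sketch, not the authors' implementation: the class and function names (`GridMap`, `integrate_sonar`, `frontier_viewpoints`, `plan_path`) are hypothetical, the map is a plain 2-D dictionary rather than the paper's labeled grid, and only a subset of the voxel states is modeled.

```python
# Illustrative sketch of the iterative exploration loop: (1) fuse sonar
# data into a 2-D grid, (2) generate candidate viewpoints, (3) plan a
# safe path through known-free cells. All names are hypothetical.
from collections import deque
from enum import Enum

class Cell(Enum):
    EMPTY = 0      # known free space
    UNSEEN = 1     # not yet observed
    OCCUPIED = 2   # sonar return

class GridMap:
    def __init__(self, size):
        # Every cell starts UNSEEN until sonar data arrive.
        self.cells = {(x, y): Cell.UNSEEN
                      for x in range(size) for y in range(size)}

    def integrate_sonar(self, hits, free):
        # Task 1: incorporate profiling-sonar returns into the grid.
        for c in hits:
            self.cells[c] = Cell.OCCUPIED
        for c in free:
            if self.cells.get(c) is Cell.UNSEEN:
                self.cells[c] = Cell.EMPTY

    def frontier_viewpoints(self):
        # Task 2: useful viewpoints are free cells bordering unseen space.
        vps = []
        for (x, y), state in self.cells.items():
            if state is not Cell.EMPTY:
                continue
            neigh = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
            if any(self.cells.get(n) is Cell.UNSEEN for n in neigh):
                vps.append((x, y))
        return vps

def plan_path(grid, start, goal):
    # Task 3: breadth-first search restricted to EMPTY cells yields a
    # safe (collision-free, fully observed) path to the next viewpoint.
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        x, y = path[-1]
        if (x, y) == goal:
            return path
        for n in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if n not in seen and grid.cells.get(n) is Cell.EMPTY:
                seen.add(n)
                queue.append(path + [n])
    return None  # goal unreachable through known free space
```

In a real system these three steps would run in a loop until no frontier viewpoints remain, at which point the structure is considered fully mapped.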
This study presents a novel octree-based three-dimensional (3D) exploration and coverage method for autonomous underwater vehicles (AUVs). Robotic exploration can be defined as the task of obtaining a full map of an unknown environment with a robotic system, achieving full coverage of the area of interest with data from a particular sensor or set of sensors. While most robotic exploration algorithms consider only occupancy data, typically acquired by a range sensor, our approach also takes optical coverage into account, so the environment is discovered with both occupancy and optical data of all discovered surfaces in a single exploration mission. In the context of underwater robotics, this capability is of particular interest, since it allows one to obtain better data while reducing operational costs and time. This study expands our previous work on 3D underwater exploration, which was demonstrated in simulation, by presenting improvements in the view planning (VP) algorithm and field validation. Our proposal combines VP with frontier-based (FB) methods, and remains computationally light even for 3D environments thanks to the use of the octree data structure. Finally, this study also presents extensive field evaluation and validation using the Girona 500 AUV. In this regard, the algorithm has been tested in different scenarios, such as a harbor structure, a breakwater structure, and an underwater boulder.
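The combination of frontier-based exploration with view planning can be illustrated with a small sketch. This is a hedged simplification under stated assumptions: a plain dictionary stands in for the octree (a real implementation would use a hierarchical structure such as OctoMap for memory efficiency), and the viewpoint score counts unknown voxels in range as a crude proxy for combined occupancy and optical information gain. All names are illustrative.

```python
# Sketch of frontier extraction and viewpoint scoring over a sparse
# 3-D voxel map. A dict stands in for the octree used in the paper.
FREE, OCC, UNKNOWN = 0, 1, 2

def neighbors(v):
    # 6-connected neighborhood of a voxel.
    x, y, z = v
    for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                       (0, -1, 0), (0, 0, 1), (0, 0, -1)):
        yield (x + dx, y + dy, z + dz)

def frontiers(voxels):
    # Frontier voxels: known-free cells bordering unknown space.
    # Voxels absent from the map are treated as UNKNOWN.
    return {v for v, s in voxels.items()
            if s == FREE and any(voxels.get(n, UNKNOWN) == UNKNOWN
                                 for n in neighbors(v))}

def score_viewpoint(voxels, vp, sensor_range=2):
    # Count unknown voxels within a cubic sensor footprint around the
    # candidate viewpoint; higher counts mean more expected new data.
    x, y, z = vp
    gain = 0
    r = sensor_range
    for dx in range(-r, r + 1):
        for dy in range(-r, r + 1):
            for dz in range(-r, r + 1):
                if voxels.get((x + dx, y + dy, z + dz), UNKNOWN) == UNKNOWN:
                    gain += 1
    return gain
```

The exploration policy would then repeatedly move to the highest-scoring frontier viewpoint until no frontiers remain; the octree keeps both the frontier query and the gain evaluation cheap even in large 3D volumes.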
Recent advances in structure-from-motion techniques are enabling many scientific fields to benefit from the routine creation of detailed 3D models. However, for a large number of applications, only a single camera is available for the image acquisition, due to cost or space constraints in the survey platforms. Monocular structure-from-motion raises the issue of properly estimating the scale of the 3D models, in order to later use those models for metrology. The scale can be determined from the presence of visible objects of known dimensions, or from information on the magnitude of the camera motion provided by other sensors, such as GPS. This paper addresses the problem of accurately scaling 3D models created from monocular cameras in GPS-denied environments, such as in underwater applications. Motivated by the common availability of underwater laser scalers, we present two novel approaches which are suitable for different laser scaler configurations. A fully-calibrated method enables the use of arbitrary laser setups, while a partially-calibrated method reduces the need for calibration by only assuming parallelism of the laser beams, with no constraints on the camera. The proposed methods have several advantages with respect to the existing methods. By using the known geometry of the scene expressed by the 3D model, along with some parameters of the laser scaler geometry, the need for laser alignment with the optical axis of the camera is removed. Furthermore, the extremely error-prone manual identification of image points on the 3D model, currently required in image-scaling methods, is eliminated as well. The performance of the methods and their applicability was evaluated on both data generated from a realistic 3D model and data collected during an oceanographic cruise in 2017.
Three separate laser configurations have been tested, encompassing nearly all possible laser setups, to evaluate the effects of terrain roughness, noise, camera perspective angle, and camera-scene distance on the final estimates of scale. In the real scenario, the computation of 6 independent model scale estimates using our fully-calibrated approach produced values with a standard deviation of 0.3%. By comparing the values to those of the only other method usable for this dataset, we showed that the consistency of scales obtained for individual lasers is much higher for our approach (0.6% compared to 4%).
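The geometric core of the partially-calibrated idea (parallel beams of known separation) can be sketched as follows. This is a hypothetical illustration, not the paper's method: it assumes the two beam hit points on the unscaled model and the common beam direction are already known, and uses the fact that the distance between two parallel lines equals the component of any connecting vector perpendicular to their shared direction. Function names are made up for the example.

```python
# Hedged sketch: recover a metric scale factor for an unscaled 3D model
# from two parallel laser beams with known physical separation.
import math

def scale_from_parallel_lasers(p1, p2, d, beam_separation_m):
    """p1, p2: 3-D intersection points of the two beams on the unscaled
    model; d: unit direction vector of the parallel beams in model
    coordinates; beam_separation_m: known beam separation in metres."""
    # Connecting vector between the two hit points.
    v = [b - a for a, b in zip(p1, p2)]
    # Remove the component along the beam direction; what remains is
    # the (model-unit) distance between the two parallel beam lines,
    # independent of terrain roughness along the beams.
    along = sum(vi * di for vi, di in zip(v, d))
    perp = [vi - along * di for vi, di in zip(v, d)]
    perp_dist = math.sqrt(sum(c * c for c in perp))
    if perp_dist == 0:
        raise ValueError("laser hit points are collinear with the beams")
    return beam_separation_m / perp_dist

def apply_scale(points, s):
    # Rescale model vertices to metric units.
    return [(x * s, y * s, z * s) for x, y, z in points]
```

Note how projecting out the beam direction makes the estimate insensitive to the surfaces hitting the beams at different depths, which is one reason laser-based scaling works on rough terrain.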
In the Mediterranean Sea, gorgonians are among the main habitat‐forming species of benthic communities on the continental shelf and slope, playing an important ecological role in coral gardens. In areas where bottom trawling is restricted, gorgonians represent one of the main components of trammel net bycatch. Since gorgonians are long‐lived and slow‐growing species, impacts derived from fishing activities can have far‐reaching and long‐lasting effects, jeopardizing their long‐term viability. Thus, mitigation and ecological restoration initiatives focusing on gorgonian populations on the continental shelf are necessary to enhance and speed up their natural recovery. Bycatch gorgonians from artisanal fishermen were transplanted into artificial structures, which were then deployed at 85 m depth on the outer continental shelf of the marine protected area of Cap de Creus (north‐west Mediterranean Sea, Spain). After 1 year, high survival rates of transplanted colonies (87.5%) were recorded with a hybrid remotely operated vehicle. This pilot study shows, for the first time, the survival potential of bycatch gorgonians once returned to their habitat on the continental shelf, and suggests the potential success of future scaled‐up restoration activities.