Sidescan sonar images are 2D representations of the seabed: the pixel location encodes the distance from the sonar and the along-track coordinate. One dimension is therefore missing for generating bathymetric maps from sidescan. The intensities of the return signals do, however, carry some information about this missing dimension. Just as shading gives depth cues in camera images, these intensities can be used to estimate bathymetric profiles. The authors investigate the feasibility of using data-driven methods for this estimation. They include quantitative evaluations of two pixel-to-pixel convolutional neural networks, trained both as standard regression networks and with conditional generative adversarial network (cGAN) loss functions. Some interesting conclusions are presented as to when to use each training method.
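The two training regimes compared above differ mainly in their loss functions. As a minimal, hypothetical sketch (the function names are mine, not the authors'), a per-pixel L1 regression loss versus a pix2pix-style cGAN generator objective could look like:

```python
import numpy as np

def l1_regression_loss(pred, target):
    """Per-pixel L1 loss, as used when training a plain regression network."""
    return float(np.mean(np.abs(pred - target)))

def cgan_generator_loss(disc_on_fake, pred, target, lam=100.0):
    """pix2pix-style generator objective: an adversarial term plus a weighted
    L1 term that keeps the output close to the ground-truth bathymetry.
    disc_on_fake holds the discriminator's probabilities on generated samples."""
    adversarial = float(-np.mean(np.log(disc_on_fake + 1e-8)))
    return adversarial + lam * l1_regression_loss(pred, target)
```

The adversarial term rewards outputs the discriminator accepts as realistic, while the L1 term anchors them to the measured depths; `lam` trades the two off.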
Sidescan sonar is a small, low-cost sensor that can be mounted on most unmanned underwater vehicles (UUVs) and unmanned surface vehicles (USVs). Its high resolution and wide coverage make it a potentially efficient and cost-effective way to obtain bathymetry where bathymetric data are unavailable. This work proposes a method for reconstructing bathymetry from sidescan data alone in large-scale surveys by formulating the problem as a global optimization, in which a Sinusoidal Representation Network (SIREN) represents the bathymetry, and the albedo and the beam profile are jointly estimated based on a Lambertian scattering model. The proposed method is assessed by comparing the reconstructed bathymetry with data collected with a high-resolution multibeam echo sounder (MBES). A bathymetric error of 20 cm is achieved on a large-scale survey. The proposed method proves an effective way to reconstruct bathymetry from sidescan sonar data when high-accuracy positioning is available. This could be of great use for surface vehicles with Global Navigation Satellite System (GNSS) receivers to obtain high-quality bathymetry in shallow water, or for small autonomous underwater vehicles (AUVs) if simultaneous localization and mapping (SLAM) can be applied to correct the navigation estimate.
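To make the formulation concrete, here is a minimal, illustrative sketch (all names are my own, not the paper's code) of a SIREN forward pass and a Lambertian intensity model of the kind such an optimization could fit:

```python
import numpy as np

rng = np.random.default_rng(0)

def siren_weights(layer_sizes, omega=30.0):
    """Initialize a SIREN: uniform weights scaled so that sine
    activations stay well-distributed through depth."""
    ws = []
    for k, (fan_in, fan_out) in enumerate(zip(layer_sizes[:-1], layer_sizes[1:])):
        bound = 1.0 / fan_in if k == 0 else np.sqrt(6.0 / fan_in) / omega
        ws.append(rng.uniform(-bound, bound, (fan_in, fan_out)))
    return ws

def siren_forward(xy, weights, omega=30.0):
    """Map 2D positions to depth with sine activations (linear last layer)."""
    h = xy
    for W in weights[:-1]:
        h = np.sin(omega * (h @ W))
    return h @ weights[-1]

def lambertian_return(albedo, beam_gain, normal, to_sonar):
    """Lambertian scattering: intensity ~ albedo * beam profile * cos(incidence)."""
    cos_incidence = np.clip(np.dot(normal, to_sonar), 0.0, None)
    return albedo * beam_gain * cos_incidence
```

In the paper's setting, the surface represented by the SIREN yields the normals that feed the scattering model, and the albedo and beam profile are optimized jointly so that modelled returns match the recorded sidescan intensities.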
Cyber-physical systems (CPSs) comprise networks of sensors and actuators integrated with a computing and communication core. Hydrobatic autonomous underwater vehicles (AUVs) can be efficient and agile, offering new use cases in ocean production, environmental sensing, and security. In this paper, a CPS concept for hydrobatic AUVs is validated in real-world field trials with the hydrobatic AUV SAM, developed at the Swedish Maritime Robotics Center (SMaRC). We present the system integration of the hardware, along with software subsystems for mission planning using Neptus, mission execution using behavior trees, flight and trim control, navigation, and dead reckoning. Together with the software systems, we show simulation environments in Simulink and Stonefish for virtual validation of the entire CPS. Extensive field validation of the different components of the CPS has been performed, and results are presented from a field demonstration scenario in the Baltic Sea involving the search and inspection of a submerged Mini Cooper using payload cameras on SAM. The full system, including the mission-planning interface, behavior tree, controllers, dead reckoning, and object-detection algorithm, is validated. The submerged target is successfully detected both in simulation and in reality, and the simulation tools show tight integration with the target hardware.
He is currently a Researcher with the Swedish Maritime Robotics project at KTH. His research interests include robotic sensing and mapping, with a focus on probabilistic reasoning and inference. Most of his recent work has been on applications of specialized neural networks to underwater sonar data. In addition, he is interested in system integration for robust and long-term robotic deployments.

John Folkesson (Senior Member, IEEE) received the B.A. degree in physics from Queens College, City University of New York, New York, NY, USA, in 1983, and the M.Sc. degree in computer science and the Ph.D. degree in robotics from the Royal Institute of Technology (KTH),
Implicit neural representations and neural rendering have gained increasing attention for bathymetry estimation from sidescan sonar (SSS). These methods incorporate multiple SSS observations of the same place to constrain the elevation estimate, converging to a globally consistent bathymetric model. However, the quality and precision of the bathymetric estimate are limited by the positioning accuracy of the autonomous underwater vehicle (AUV) carrying the sonar. Because no geo-referencing system like GPS is available underwater, the AUV's global position estimate from dead reckoning (DR) has unbounded error. To address this challenge, we propose in this letter a modern and scalable framework, NeuRSS, for SSS SLAM based on DR and loop closures (LCs) over large timescales, with an elevation prior provided by the bathymetric estimate using neural rendering from SSS. This framework is an iterative procedure that alternately improves localization and bathymetric mapping. Initially, the bathymetry estimated from SSS using the DR estimate, though crude, provides an important elevation prior in the nonlinear least-squares (NLS) optimization that estimates the relative pose between two loop-closure vertices in a pose graph. Subsequently, the global pose estimate from the SLAM component improves the positioning estimate of the vehicle, and thus the bathymetry estimation. We validate our localization and mapping approach on two large surveys* collected with a surface vessel and an AUV, respectively. We evaluate the localization results against ground truth and compare the bathymetry estimation against data collected with multibeam echo sounders (MBES).

* A lawn-mower pattern has the vehicle perform the survey as a series of long parallel lines.
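The SLAM back end described above fuses DR odometry with loop-closure constraints in a least-squares problem. As a toy, one-dimensional sketch of that idea (a linear pose graph of my own construction, not the paper's NLS formulation with an elevation prior):

```python
import numpy as np

def solve_pose_graph_1d(n, odom, loops, odom_info=1.0, loop_info=100.0):
    """Tiny linear pose graph: poses x_0..x_{n-1} on a line, x_0 anchored at 0.
    odom: relative measurements z_i for x_{i+1} - x_i (dead reckoning).
    loops: (i, j, z) loop-closure constraints for x_j - x_i."""
    rows, rhs, w = [], [], []
    r = np.zeros(n); r[0] = 1.0               # anchor: x_0 = 0
    rows.append(r); rhs.append(0.0); w.append(1e6)
    for i, z in enumerate(odom):              # odometry: x_{i+1} - x_i = z
        r = np.zeros(n); r[i] = -1.0; r[i + 1] = 1.0
        rows.append(r); rhs.append(z); w.append(odom_info)
    for i, j, z in loops:                     # loop closure: x_j - x_i = z
        r = np.zeros(n); r[i] = -1.0; r[j] = 1.0
        rows.append(r); rhs.append(z); w.append(loop_info)
    # Weighted linear least squares over all constraints.
    sw = np.sqrt(np.array(w))
    A = sw[:, None] * np.array(rows)
    b = sw * np.array(rhs)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

Because the loop closure is weighted more heavily than the odometry, the optimized trajectory is pulled toward the measured relative pose rather than the drifted DR value; in NeuRSS, the analogous relative poses between loop-closure vertices are estimated with the elevation prior from the neural rendering step.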