Estimating the geometric structure of underwater objects underpins a wide range of applications, such as mapping shipwrecks for archaeology, monitoring the health of coral reefs, detecting faults in offshore oil rigs and pipelines, and detecting and identifying potential threats on the seabed. Acoustic imaging is the most popular choice for underwater sensing. Underwater exploratory vehicles typically employ wide-aperture Sound Navigation and Ranging (SONAR) imaging sensors. Although their wide aperture enables scouring large volumes of water ahead of the vehicle for obstacles, the resulting images are blurry due to integration over the aperture. Performing three-dimensional (3D) reconstruction from such blurry data is notoriously difficult, and this challenging inverse problem is further exacerbated by speckle noise and reverberation. State-of-the-art methods for 3D reconstruction from sonar require either bulky and expensive matrix arrays of sonar sensors or additional narrow-aperture sensors; because of their small footprint, the latter leave gaps between reconstructed scans, and avoiding such gaps requires slow and cumbersome scanning by the vehicles that carry them. In this paper, we present two reconstruction methods enabling on-site 3D reconstruction from imaging sonars of any aperture. The first is an elegant linear formulation of the problem as blind deconvolution with a spatially varying kernel. The second is a simple algorithmic approach to approximate reconstruction using a nonlinear formulation. We demonstrate that our approximation algorithms perform 3D reconstruction directly from the data recorded by wide-aperture systems, eliminating the need to mount multiple sensors on underwater vehicles for this purpose. Additionally, we observe that the wide aperture can be exploited to improve the coverage of the reconstructed samples on the scanned object's surface.
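The abstract does not disclose the authors' exact model, but the linear formulation it names can be illustrated in miniature. The sketch below is a generic example of deconvolution with a spatially varying kernel, not the paper's method: a 1D scene of point scatterers is blurred by a Gaussian kernel whose width grows with range (mimicking integration over a widening aperture), and a Tikhonov-regularised least-squares solve recovers the scene. All names and parameter values are illustrative assumptions.

```python
import numpy as np

def varying_blur_matrix(n, widths):
    """Dense matrix whose row i is a normalised Gaussian blur kernel
    of standard deviation widths[i], centred at sample i."""
    idx = np.arange(n)
    A = np.exp(-0.5 * ((idx[None, :] - idx[:, None]) / widths[:, None]) ** 2)
    return A / A.sum(axis=1, keepdims=True)

n = 64
rng = np.random.default_rng(0)
x_true = np.zeros(n)
x_true[[10, 30, 50]] = [1.0, 0.5, 0.8]    # sparse scene: point scatterers
widths = np.linspace(1.0, 3.0, n)         # blur width grows with range
A = varying_blur_matrix(n, widths)
y = A @ x_true + 0.01 * rng.standard_normal(n)  # blurred, noisy observation

# Tikhonov-regularised least squares: argmin ||A x - y||^2 + lam ||x||^2
lam = 1e-2
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
```

In a real system the kernel would come from the sonar's beam pattern rather than a Gaussian, and the unknown-kernel (blind) case requires estimating `A` jointly with `x`; this sketch only shows the non-blind linear core.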
We demonstrate the efficacy of our algorithms on simulated as well as real data acquired using two sensors, and we compare our work to the state of the art in sonar reconstruction. Finally, we show the employability of our reconstruction methods on field data gathered by an autonomous underwater vehicle.

KEYWORDS: deconvolution, perception, SONAR processing, 3D reconstruction, underwater robotics
SONAR mapping of underwater environments produces dense point clouds. These maps have large memory footprints, are inherently noisy, and consist of raw data with no semantic information. This paper presents an approach to underwater semantic mapping in which known man-made structures that appear in multibeam SONAR data are automatically recognised. The input to the algorithm consists of SONAR images acquired by an Autonomous Underwater Vehicle (AUV) and a catalogue of 'guessed' 3D CAD models of structures that may potentially be found in the data. The output is an online 3D map with navigation correction; in addition, for any objects in the input catalogue, the dense point clouds of those objects are replaced with the corresponding CAD model at the correct pose. Our method operates with a catalogue of coarse CAD models and proves suitable for online semantic mapping of a partially man-made underwater environment such as a typical oil field. The semantic world model can be generated at any desired resolution, making it useful for both offline and online processing such as mission planning, data analysis, manipulation, or vehicle relocalisation. Our algorithm proceeds in two phases. First, we recognise objects using an efficient, rotation-invariant 2D descriptor combined with a histogram-based method. Then, we determine pose using a 6-degree-of-freedom registration of the 3D object to the local scene: a fast 2D correlation provides an initial estimate, which is refined with an iterative closest point (ICP)-based method. After structures have been located and identified, we build a semantic representation of the world, resulting in a lightweight yet accurate world model. We demonstrate the applicability of our method on field data acquired by an AUV in Loch Eil, Scotland.
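The ICP refinement step named above can be sketched generically. This is not the authors' implementation (the abstract gives no details); it is a minimal point-to-point ICP in 2D using brute-force nearest-neighbour matching and the closed-form Kabsch/SVD rigid alignment, with all names and test values our own assumptions.

```python
import numpy as np

def icp_2d(src, dst, iters=30):
    """Minimal point-to-point ICP: rigidly align src onto dst (both (N, 2))."""
    src = src.copy()
    R_total, t_total = np.eye(2), np.zeros(2)
    for _ in range(iters):
        # Brute-force nearest-neighbour correspondences (clarity over speed).
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # Closed-form rigid alignment of the matched pairs (Kabsch / SVD).
        mu_s, mu_d = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:   # guard against a reflection solution
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        src = src @ R.T + t
        # Compose the incremental transform into the running total.
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Synthetic check: a known small rigid motion should be recovered.
rng = np.random.default_rng(1)
pts = rng.uniform(-1.0, 1.0, size=(40, 2))
theta = 0.05
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([0.05, -0.05])
R_est, t_est = icp_2d(pts, pts @ R_true.T + t_true)
```

A production pipeline would use a k-d tree for matching and an initial pose estimate (the paper's fast 2D correlation plays that role), since plain ICP only converges from a nearby starting pose.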
Due to the expensive nature of field data gathering, the lack of training data often limits the performance of Automatic Target Recognition (ATR) systems. This problem is commonly addressed with domain adaptation techniques; however, existing methods fail to satisfy the constraints of resource- and time-limited underwater systems. We propose to address this issue via online fine-tuning of the ATR algorithm using a novel data-selection method. Our data-mining approach relies on visual similarity and outperforms the traditionally employed hard-mining methods. We present a comparative performance analysis across a wide range of simulated environments and highlight the benefits of our method for rapid adaptation to previously unseen environments.
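Selection by visual similarity can be sketched in a generic form. The abstract does not specify the authors' similarity measure, so the snippet below only illustrates one common choice: rank archived training samples by cosine similarity between their feature vectors and the feature centroid of the newly observed scene, then fine-tune on the top-ranked samples. The function name, the 2D toy features, and the centroid choice are all our own assumptions.

```python
import numpy as np

def select_by_similarity(archive_feats, scene_feats, k=3):
    """Rank archived training samples by cosine similarity to the feature
    centroid of the newly observed scene and return the top-k indices."""
    centroid = scene_feats.mean(axis=0)
    centroid /= np.linalg.norm(centroid)
    a = archive_feats / np.linalg.norm(archive_feats, axis=1, keepdims=True)
    return np.argsort(a @ centroid)[::-1][:k]

# Toy features: samples 0 and 2 resemble the new scene, sample 1 does not.
archive = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])
scene = np.array([[1.0, 0.05], [0.95, 0.0]])
picked = select_by_similarity(archive, scene, k=2)
```

In practice the feature vectors would come from a learned embedding (e.g. an intermediate layer of the ATR network) rather than raw 2D coordinates, and the selected subset would drive the online fine-tuning pass.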