The Earth’s surface is mostly covered by water, and the ocean is the source of a significant share of natural resources and renewable energy. However, only a small fraction of the ocean has been surveyed. Estimating a 3D model of the environment from a single video eases the task of surveying the underwater environment, reduces costs, and opens the door to autonomous exploration of unknown environments. To estimate the 3D structure of a vehicle’s surroundings, we propose a deep-learning-based Simultaneous Localization and Mapping (SLAM) method. Our method predicts a depth map for a given video frame while simultaneously estimating the vehicle’s motion between frames. It is fully self-supervised: training requires only a dataset of videos, without ground truth. We also propose a novel learned depth-map prior based on Generative Adversarial Networks (GANs) to improve depth prediction. We evaluate our method on the KITTI dataset and on a private dataset of subsea inspection videos, and show that it outperforms state-of-the-art SLAM methods in both depth prediction and pose estimation. In particular, it achieves a mean Absolute Trajectory Error of 1.6 feet on our private subsea test dataset.
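The Absolute Trajectory Error reported above compares an estimated trajectory against ground truth. As a minimal sketch of how a mean ATE is typically computed (the trajectory format and the simple translational alignment here are assumptions, not details taken from the paper):

```python
import numpy as np

def mean_ate(est, gt):
    """Mean Euclidean distance between corresponding estimated and
    ground-truth positions, after removing the mean offset (a simple
    translational alignment; full evaluations often use a rigid-body
    alignment instead)."""
    est = np.asarray(est, dtype=float)
    gt = np.asarray(gt, dtype=float)
    aligned = est - est.mean(axis=0) + gt.mean(axis=0)
    return float(np.linalg.norm(aligned - gt, axis=1).mean())

# Example with hypothetical 2D positions:
err = mean_ate([[0, 0], [2, 0]], [[0, 0], [1, 0]])  # → 0.5
```

Evaluation toolkits for KITTI-style benchmarks compute essentially this quantity over aligned pose sequences.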
ROV localization is essential for subsea operations and is commonly achieved with acoustic sensors such as ultra-short baseline (USBL) and long baseline (LBL) systems. These systems are costly and prone to errors in specific scenarios, such as operations in shallow water or in proximity to subsea structures. In these scenarios the operator may encounter shadow areas where accuracy is compromised or position estimation is not possible at all. Since a significant number of subsea inspection operations are performed under these circumstances, we propose a solution that estimates the ROV’s motion from its live video feed, providing real-time position estimates in any scenario. The neural network outputs motion and orientation estimates, acting as an inertial navigation system that can be combined with, correct, or replace the acoustic sensors’ position estimates. The proposed solution was tested in a real subsea operation where acoustic sensors were inaccurate. We describe the use case and how, with our solution and an in-house simulator, we were able to monitor the operation and replay the entire mission in a simulated environment. We show that our solution effectively estimates the ROV trajectory throughout the operation. Our method can improve the accuracy of acoustic sensors, replace them in situations where they cannot operate, and afterwards allow the entire operation to be reviewed in our digital twin.
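Recovering an ROV trajectory from per-frame motion estimates amounts to dead reckoning: composing each relative motion onto the previous global pose. A minimal planar sketch of that composition step (2D poses and the relative-motion format are illustrative assumptions; the paper's network operates on video and its output parameterization may differ):

```python
import math

def compose(pose, delta):
    """Apply a relative motion (dx, dy, dtheta), expressed in the
    vehicle's body frame, to a global pose (x, y, theta)."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

def integrate(start, deltas):
    """Accumulate a sequence of relative motions into a trajectory."""
    traj = [start]
    for d in deltas:
        traj.append(compose(traj[-1], d))
    return traj

# Move 1 m forward while turning 90°, then 1 m forward again:
traj = integrate((0.0, 0.0, 0.0),
                 [(1.0, 0.0, math.pi / 2), (1.0, 0.0, 0.0)])
# final pose ≈ (1.0, 1.0, pi/2)
```

Because errors accumulate in pure dead reckoning, such visual estimates are typically fused with absolute fixes (e.g. acoustic positioning) when those are available, which matches the combine-or-correct role described above.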