Based on an analysis of state-of-the-art research on target detection and recognition in turbid water, this review divides the overall problem into two areas: image degradation caused by the unique conditions of turbid water, and target recognition. Existing methods are grouped into three modules: target detection based on deep learning, underwater image restoration and enhancement, and underwater image processing based on polarization imaging and scattering. The relevant research results are analyzed in detail; methods for image processing, target detection, and recognition in turbid water are summarized, along with the relevant datasets. The main application scenarios of underwater target detection and recognition technology are listed, the key problems in current technology are identified, and solutions and development directions are discussed. This work provides a reference for engineering tasks in turbid underwater areas and an outlook on the future development of underwater intelligent sensing technology.
This paper proposes a method that combines style transfer with a learned descriptor to improve the matching performance of underwater sonar images. In underwater vision, sonar is currently the most effective long-range detection sensor and performs well in map-building and target-search tasks. However, traditional image matching algorithms were all developed for optical images. To resolve this mismatch, the style transfer method is used to convert sonar images into an optical style, and a learned descriptor with strong expressiveness for sonar image matching is introduced. Experiments show that this method significantly improves the matching quality of sonar images. It also offers a new idea for preprocessing underwater sonar images through style transfer.
In underwater vision, image matching between the two main sensors (sonar and optical cameras) has long been a challenging problem. Their independent imaging mechanisms produce images of different modalities whose local features differ significantly, rendering general matching methods designed for optical images ineffective. To make full use of underwater acoustic and optical images and to promote the development of multisensor information fusion (MSIF) technology, this letter applies an image attribute transfer algorithm and an advanced local feature descriptor to the underwater acousto-optic image matching problem. We test on real and simulated underwater images; the experimental results show that the proposed method effectively preprocesses these multimodal images to obtain accurate matching results, providing a new solution for the underwater multisensor image matching task.
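The local-descriptor matching step described in the abstracts above is commonly implemented as brute-force nearest-neighbour search with Lowe's ratio test. The following is a minimal sketch, not the authors' actual pipeline: it matches packed binary descriptors (such as those produced by ORB or a learned binary descriptor) by Hamming distance in pure NumPy; the function names and parameters are illustrative assumptions.

```python
import numpy as np

def hamming_matrix(d1, d2):
    """Pairwise Hamming distances between two sets of binary descriptors.

    d1: (N, B) uint8 array, d2: (M, B) uint8 array; each row is a packed
    binary descriptor (B bytes, e.g. 32 bytes for a 256-bit descriptor).
    """
    # XOR every pair of descriptors, then count set bits per byte
    # with a precomputed popcount lookup table.
    popcount = np.array([bin(i).count("1") for i in range(256)], dtype=np.uint8)
    xor = d1[:, None, :] ^ d2[None, :, :]              # (N, M, B)
    return popcount[xor].sum(axis=2).astype(np.int32)  # (N, M)

def ratio_test_match(d1, d2, ratio=0.8):
    """Brute-force matching with Lowe's ratio test.

    Returns (i, j) pairs: descriptor i in d1 matched to its nearest
    neighbour j in d2, kept only when the nearest distance is clearly
    smaller than the second-nearest (dist_best < ratio * dist_second).
    """
    dist = hamming_matrix(d1, d2)
    matches = []
    for i in range(dist.shape[0]):
        order = np.argsort(dist[i])
        best, second = order[0], order[1]
        if dist[i, best] < ratio * dist[i, second]:
            matches.append((i, int(best)))
    return matches
```

The ratio test discards ambiguous correspondences, which matters for cross-modal data where many false candidates have similar distances.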
Image feature matching is essential in many computer vision applications, and its foundation is feature detection, a crucial feature quantification process. This manuscript focuses on detecting more features in underwater acoustic imagery for further ocean engineering applications of autonomous underwater vehicles (AUVs). The mainstream feature detection operators were developed for optical images, and no feature detector yet targets underwater acoustic imagery. To analyze the suitability of existing feature detectors for acoustic imagery, and to support the future development of an operator that can robustly detect feature points in underwater imagery, this manuscript compares the performance of well-established handcrafted feature detectors with that of the increasingly popular deep-learning-based detectors, filling a gap in the literature. The tested datasets come from the most commonly used side-scan sonars (SSSs) and forward-looking sonars (FLSs). Additionally, applying these detectors on the phase congruency (PC) layer of the acoustic imagery is proposed, with the aim of finding a solution that balances detection accuracy and speed. The experimental results show that the ORB (Oriented FAST and Rotated BRIEF) and BRISK (Binary Robust Invariant Scalable Keypoints) detectors achieve the best overall performance, the FAST detector is the fastest, and the PC and Sobel layers are the most favorable for implementing feature detection.
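The FAST detector named above illustrates why handcrafted detectors are attractive for speed-sensitive AUV workloads: the corner test is a simple intensity comparison on a ring of pixels. The sketch below, a simplified pure-NumPy FAST-9 without non-maximum suppression and not the evaluation code of the manuscript, shows the core segment test; thresholds and helper names are illustrative assumptions.

```python
import numpy as np

# Offsets of the 16 pixels on a Bresenham circle of radius 3 (row, col),
# listed in circular order, as used by the FAST detector.
CIRCLE = [(-3, 0), (-3, 1), (-2, 2), (-1, 3), (0, 3), (1, 3), (2, 2), (3, 1),
          (3, 0), (3, -1), (2, -2), (1, -3), (0, -3), (-1, -3), (-2, -2), (-3, -1)]

def fast_corners(img, t=20, n=9):
    """Minimal FAST-9 segment test (no non-maximum suppression).

    A pixel p is a corner if at least `n` contiguous pixels on the
    surrounding circle are all brighter than p + t or all darker than
    p - t. Returns a list of (row, col) keypoints.
    """
    img = img.astype(np.int32)
    h, w = img.shape
    keypoints = []
    for y in range(3, h - 3):
        for x in range(3, w - 3):
            p = img[y, x]
            ring = np.array([img[y + dy, x + dx] for dy, dx in CIRCLE])
            for mask in (ring > p + t, ring < p - t):
                # Duplicate the ring so a contiguous run may wrap around.
                doubled = np.concatenate([mask, mask])
                run = best = 0
                for v in doubled:
                    run = run + 1 if v else 0
                    best = max(best, run)
                if best >= n:
                    keypoints.append((y, x))
                    break
    return keypoints
```

On a bright square against a dark background, this test fires near the square's corners but not along its straight edges, since an edge yields at most about half the ring on one side of the threshold.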