Drawing on state-of-the-art research into target detection and recognition in turbid water, and aiming to address both the general problems of target detection and the unique effects of turbid areas, this review divides the overall problem into two parts: image degradation caused by the conditions of turbid water, and target recognition. Existing recognition methods are grouped into three modules: target detection based on deep learning, underwater image restoration and enhancement, and underwater image processing based on polarization imaging technology and scattering. The relevant research results are analyzed in detail, and methods for image processing, target detection, and recognition in turbid water are summarized together with the relevant datasets. The main scenarios in which underwater target detection and recognition technology is applied are listed, the key problems in current technology are identified, and possible solutions and development directions are discussed. This work provides a reference for engineering tasks in turbid underwater areas and an outlook on the future development of underwater intelligent sensing technology.
This paper proposes a method that combines style transfer with a learned descriptor to improve the matching performance of underwater sonar images. In underwater vision, sonar is currently the most effective long-range detection sensor, performing well in map-building and target-search tasks. However, traditional image-matching algorithms were all developed for optical images. To resolve this mismatch, style transfer is used to convert sonar images into an optical style, and at the same time a learned descriptor with strong expressiveness for sonar image matching is introduced. Experiments show that this method significantly improves the matching quality of sonar images. It also offers a new, style-transfer-based approach to preprocessing underwater sonar images.
In the field of underwater vision, image matching between the two main sensors (sonar and optical camera) has always been a challenging problem. The two sensors' distinct imaging mechanisms produce images of different modalities, and local features differ significantly across modalities, which renders general matching methods designed for optical images ineffective. To make full use of underwater acoustic and optical images and to promote the development of multisensor information fusion (MSIF) technology, this letter applies an image attribute transfer algorithm and an advanced local feature descriptor to the problem of underwater acousto-optic image matching. We test on real and simulated underwater images; experimental results show that the proposed method can effectively preprocess these multimodal images to obtain accurate matching results, providing a new solution for the underwater multisensor image-matching task.
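The pipeline these two abstracts describe, which involves transferring the sonar image into an optical style and then matching local descriptors between the two modalities, typically ends with a nearest-neighbor descriptor comparison filtered by Lowe's ratio test. The papers use a learned descriptor; the sketch below is only a minimal, generic illustration of that final matching stage, assuming the descriptors (from any extractor, learned or classical) are already computed as NumPy arrays. It is not the authors' implementation.

```python
import numpy as np

def ratio_test_match(des_a, des_b, ratio=0.75):
    """Brute-force descriptor matching with Lowe's ratio test.

    des_a: (N, D) array of descriptors from one image (e.g. the
           style-transferred sonar image); des_b: (M, D) array from
    the other image. Returns accepted (i, j) index pairs.
    """
    # Pairwise Euclidean distances between every descriptor pair.
    dists = np.linalg.norm(des_a[:, None, :] - des_b[None, :, :], axis=2)
    matches = []
    for i, row in enumerate(dists):
        order = np.argsort(row)
        best, second = order[0], order[1]
        # Accept only if the best match is clearly better than the
        # runner-up; ambiguous matches are discarded.
        if row[best] < ratio * row[second]:
            matches.append((i, int(best)))
    return matches

# Toy usage: descriptors of the second image are noisy copies of the
# first, so each should match its own counterpart.
rng = np.random.default_rng(0)
des_a = rng.normal(size=(5, 8))
des_b = des_a + 0.01 * rng.normal(size=(5, 8))
pairs = ratio_test_match(des_a, des_b)
```

The ratio test matters here because cross-modal descriptors are noisier than same-modality ones; a strict ratio threshold trades match count for reliability, which is usually the right trade for sonar-to-optical matching.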