2022
DOI: 10.1155/2022/8110695

Dynamic Multiscale Feature Fusion Method for Underwater Target Recognition

Abstract: Feature information of small-scale targets is severely degraded by the interference of complex underwater terrain and light refraction. Moreover, the unbalanced distribution of underwater target samples also reduces the accuracy of spatial semantic feature extraction. To address these problems, this paper proposes a dynamic multiscale feature fusion method for underwater target recognition. First, a multiscale information noise-contrastive estimation (MS-InfoNCE) loss is used to extract the signi…
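The excerpt does not spell out the MS-InfoNCE loss, so the following is a minimal, hedged sketch of one plausible reading: an InfoNCE-style contrastive loss computed independently at several feature scales and summed. The function names (infonce_loss, ms_infonce_loss), the global-average pooling per scale, and the temperature value are illustrative assumptions, not the paper's exact formulation.

# Hedged sketch of a multiscale InfoNCE-style contrastive loss (assumed reading).
# Assumption: each scale yields feature maps for two augmented views of the same
# batch; same-index pairs are positives, all other pairs are negatives.
import torch
import torch.nn.functional as F

def infonce_loss(z_a, z_b, temperature=0.1):
    # z_a, z_b: (N, D) embeddings of two views of the same N samples.
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature               # (N, N) similarity matrix
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)            # diagonal entries are the positives

def ms_infonce_loss(feats_a, feats_b, weights=None, temperature=0.1):
    # feats_a, feats_b: lists of (N, C_s, H_s, W_s) feature maps, one entry per scale.
    weights = weights or [1.0] * len(feats_a)
    total = 0.0
    for w, fa, fb in zip(weights, feats_a, feats_b):
        za = fa.mean(dim=(2, 3))                        # global average pool -> (N, C_s)
        zb = fb.mean(dim=(2, 3))
        total = total + w * infonce_loss(za, zb, temperature)
    return total

In training, feats_a and feats_b would come from the same backbone applied to two augmentations of each underwater image; the per-scale weights are a free design choice, not something stated in the excerpt.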

Cited by 6 publications (3 citation statements). References 33 publications.

Citation statements:
“…Ye et al. (2022) introduced a fusion multi-scale attention mechanism network to address boundary ambiguity, utilizing a feature refinement compensation module to minimize inter-class disparities. Cai et al. (2022a) proposed a dynamic multi-scale feature fusion method for underwater target recognition. A further work introduced MLLE, an effective method for enhancing underwater images.…”
Section: Multi-scale Feature Extraction
confidence: 99%
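Neither excerpt specifies the fusion rule, but the attention-weighted multi-scale fusion pattern they refer to can be sketched as below, assuming feature maps are resized to a common resolution and blended with softmax gates predicted from pooled features. The module name MultiScaleAttentionFusion, the one-gate-per-scale design, and the choice of the finest resolution as the fusion target are assumptions for illustration only.

# Hedged sketch of attention-weighted multi-scale feature fusion
# (an illustration of the general pattern, not either paper's exact module).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleAttentionFusion(nn.Module):
    def __init__(self, in_channels, num_scales):
        super().__init__()
        # Predict one scalar gate per scale from globally pooled features.
        # Assumes every scale has the same channel count (in_channels).
        self.gate = nn.Linear(in_channels * num_scales, num_scales)

    def forward(self, feats):                           # feats: list of (N, C, H_s, W_s)
        target_size = feats[0].shape[-2:]               # fuse at the first (finest) resolution
        resized = [F.interpolate(f, size=target_size, mode="bilinear", align_corners=False)
                   for f in feats]
        pooled = torch.cat([f.mean(dim=(2, 3)) for f in resized], dim=1)   # (N, C * S)
        gates = torch.softmax(self.gate(pooled), dim=1)                    # (N, S)
        fused = sum(g.view(-1, 1, 1, 1) * f
                    for g, f in zip(gates.unbind(dim=1), resized))
        return fused                                    # (N, C, H_0, W_0)

A feature refinement or compensation step, as mentioned for Ye et al. (2022), would sit after this fusion; it is omitted here because the excerpt gives no details about it.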
“…In [20], Gao et al. used a single image to calibrate the camera through a specially designed checkerboard. Cai et al. [21] proposed a dynamic multiscale feature fusion method for underwater object recognition, which learns the spatial semantic features of the object through a dynamic conditional probability matrix to improve the recognition accuracy of distorted underwater objects. Mozaffari et al. [22] proposed a high-quality eye-tracking reference frame to improve the revisit accuracy of a real-time active eye-movement correction system between consecutive imaging sessions.…”
Section: Related Work
confidence: 99%
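The excerpt only names the dynamic conditional probability matrix used in [21]; one plausible reading, sketched below under that assumption, is a class co-occurrence matrix estimated from training labels and used to blend each image's per-class scores with the scores of classes that usually appear alongside them. The functions conditional_probability_matrix and refine_scores and the blending coefficient alpha are hypothetical names and values, not taken from the cited paper.

# Hedged sketch: re-weighting class scores with a conditional probability
# (co-occurrence) matrix -- an assumed reading of the excerpt above.
import torch

def conditional_probability_matrix(label_sets, num_classes):
    # label_sets: iterable of per-image collections of ground-truth class ids.
    counts = torch.zeros(num_classes, num_classes)
    for labels in label_sets:
        labels = list(set(labels))
        for i in labels:
            for j in labels:
                counts[i, j] += 1
    occurrences = counts.diagonal().clamp(min=1.0)      # how often each class appears
    return counts / occurrences.unsqueeze(1)            # row i holds P(class j | class i)

def refine_scores(scores, cond_prob, alpha=0.3):
    # scores: (N, num_classes) probabilities, e.g. sigmoid outputs of a detector head.
    context = scores @ cond_prob                        # support from co-occurring classes
    context = context / scores.sum(dim=1, keepdim=True).clamp(min=1e-6)
    return (1.0 - alpha) * scores + alpha * context     # blend raw and context scores

Whether the matrix in [21] is estimated offline like this or predicted dynamically per image is not stated in the excerpt; the blending step simply illustrates how spatial-semantic context could correct scores for distorted targets.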
“…Due to the influence of complex sea conditions and the refraction of light during transmission, the images obtained by AUVs and other underwater vision equipment exhibit weak features such as blur, feature loss, and distortion. Most existing deep-network-based methods address weak-feature target recognition through image deblurring, feature correction based on feature co-occurrence relationships [2], distortion correction [3, 4], and other techniques. However, such algorithms can easily lose the original spatial and semantic information of small targets [5, 6] and add extra modules and computational overhead to the algorithm [7].…”
Section: Introduction
confidence: 99%