The core of underwater acoustic recognition is extracting the spectral features of targets. A target's speed and track usually produce a Doppler shift, which makes it significantly harder to recognize targets observed at different Doppler frequencies. This paper proposes a deep learning approach with a channel attention mechanism for underwater acoustic recognition, built on three key designs; the feature structures capture high-dimensional underwater acoustic data, and the feature extraction model is the most important component. First, we develop a ResNet to extract deep, abstract spectral features of the targets. Then, a channel attention mechanism is introduced into this network (camResNet) to enhance the energy of the stable spectral features produced by the residual convolutions, which helps represent the inherent characteristics of the targets more faithfully. Finally, a classifier based on one-dimensional convolution recognizes the targets. We evaluate our approach on challenging data containing four kinds of underwater acoustic targets under different working conditions. Experiments show that the proposed approach achieves the highest recognition accuracy (98.2%) among the compared approaches and outperforms a ResNet with a widely used channel attention mechanism on data with different working conditions.
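The abstract does not specify the camResNet architecture in detail. As a rough illustration only, the following PyTorch sketch shows a residual block with a squeeze-and-excitation-style channel attention layer that reweights feature channels; the layer sizes, reduction ratio, and class names are assumptions, not the paper's specification.

```python
# Hedged sketch: a residual block with squeeze-and-excitation-style channel
# attention, illustrating the general idea behind a channel-attention ResNet
# block. All sizes and names are illustrative assumptions.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)           # squeeze: global average per channel
        self.fc = nn.Sequential(                      # excitation: per-channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                  # reweight channels

class CamResBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.att = ChannelAttention(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = self.att(out)                           # emphasize stable spectral channels
        return self.relu(out + x)                     # residual connection
```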
Long-range underwater targets must be identified accurately and quickly for both defense and civil purposes. However, the performance of an underwater acoustic target recognition (UATR) system can be significantly degraded by factors such as scarce data and varying ship working conditions. Because the marine environment is highly complex, UATR relies heavily on feature engineering, and manually extracted features are sometimes ineffective in statistical models. This paper proposes an end-to-end UATR model based on a convolutional neural network and attention mechanisms. Using raw time-domain data as input, the network combines residual and densely connected convolutional structures to take full advantage of both. On top of this backbone, a channel attention mechanism and a temporal attention mechanism are added to extract information along the channel and temporal dimensions. Experiments on a measured dataset of four types of ship-radiated noise show that the proposed method achieves the highest correct recognition rate, 97.69%, under different working conditions and outperforms other deep learning methods.
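For illustration, the sketch below shows how a temporal attention layer can weight the time steps of 1-D convolutional features computed from raw waveforms; the channel attention is analogous to the 2-D sketch above. The backbone, kernel sizes, and names are assumptions and do not reproduce the paper's residual/dense architecture.

```python
# Hedged sketch: temporal attention over 1-D convolutional features from raw
# waveforms. The frontend stands in for the paper's residual + dense backbone.
import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    """Weights each time step of a (batch, channels, time) feature map."""
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Conv1d(channels, 1, kernel_size=1)    # one score per time step

    def forward(self, x):
        w = torch.softmax(self.score(x), dim=2)               # attention over time
        return x * w                                          # emphasize informative frames

class RawWaveformClassifier(nn.Module):
    def __init__(self, n_classes=4, channels=64):
        super().__init__()
        self.frontend = nn.Sequential(                        # stand-in backbone
            nn.Conv1d(1, channels, kernel_size=11, stride=4, padding=5),
            nn.BatchNorm1d(channels),
            nn.ReLU(inplace=True),
        )
        self.temporal_att = TemporalAttention(channels)
        self.head = nn.Linear(channels, n_classes)

    def forward(self, wav):                                   # wav: (batch, 1, samples)
        feats = self.temporal_att(self.frontend(wav))
        return self.head(feats.mean(dim=2))                   # global pooling + classifier
```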
The target spectrum, commonly used for feature extraction in underwater acoustic target classification, can be improperly recovered by a conventional beamformer (CBF) owing to its frequency-variant spatial response, leading to degraded classification performance. In this paper, we propose a target spectrum reconstruction method under a sparse Bayesian learning framework with joint sparsity priors that not only achieves high-resolution target separation in the angular domain but also attains constant beamwidth across frequency without sacrificing angular resolution. Experiments on real measured array data show that the spectrum recovered by the proposed method suppresses interference effectively and preserves more detailed spectral structures than CBF, indicating it is better suited to target classification because it retains more representative and discriminative characteristics. Moreover, owing to target motion and the underwater channel, the frequencies of prominent spectral line components can shift over time, which harms classification performance. To overcome this problem, we propose a frequency shift-invariant feature extraction method built on carefully designed frequency shift-invariant filter banks. Classification experiments demonstrate that the proposed methods outperform traditional CBF and Mel-frequency features and help improve underwater recognition performance.
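The exact design of the frequency shift-invariant filter banks is not given in the abstract. The NumPy sketch below only illustrates the underlying intuition: averaging spectral energy over fixed frequency bands yields features that change little when a spectral line shifts within a band. The band layout, widths, and parameters are assumptions for illustration.

```python
# Hedged sketch: log band-energy features that are tolerant to small frequency
# shifts of spectral lines, because a shifted line tends to stay in its band.
import numpy as np

def band_energy_features(x, fs, n_fft=4096, n_bands=32, f_lo=10.0, f_hi=2000.0):
    """Average power-spectral energy in fixed frequency bands (illustrative)."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x)), n_fft)) ** 2
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
    edges = np.linspace(f_lo, f_hi, n_bands + 1)         # uniform band edges (assumption)
    feats = np.empty(n_bands)
    for k in range(n_bands):
        band = spec[(freqs >= edges[k]) & (freqs < edges[k + 1])]
        feats[k] = np.log(band.mean() + 1e-12) if band.size else np.log(1e-12)
    return feats
```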