2021
DOI: 10.1109/jsen.2020.3044314

Polarimetric HRRP Recognition Based on ConvLSTM With Self-Attention

Cited by 30 publications (7 citation statements)
References 61 publications
“…The MSGF‐1D‐CNN is configured with the following parameters: the window size and step size of the max‐pooling are both set to 3, and the stride of all convolution kernels is set to 1. In addition to comparing MSGF‐1D‐CNN with standard 1D‐CNN to verify the improvement, MSGF‐1D‐CNN is also compared with the stacked denoising sparse AE (sDSAE) [31], Bidirectional LSTM (Bi‐LSTM) [1], Bidirectional GRU (Bi‐GRU) [32], one‐dimensional local receptive field‐based extreme learning auto‐encoder (ELM‐LRF‐AE) [2], convolutional long short‐term memory (ConvLSTM) [33] and one‐dimensional convolutional neural network with channel attention (CNN1D‐CA) [15]. All the above DNN‐based HRRP recognition methods take advantage of the diffGrad algorithm [30] for training.…”
Section: Experiments and Discussion
confidence: 99%
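The hyperparameters quoted above (a max-pooling window and step of 3, and a convolution stride of 1) can be illustrated with a minimal PyTorch-style sketch. The channel counts, kernel size, and depth below are hypothetical placeholders rather than the actual MSGF-1D-CNN configuration, and Adam is suggested only as a stand-in for the diffGrad optimizer [30] named in the statement.

import torch
import torch.nn as nn

class OneDConvBlock(nn.Module):
    """One convolution + pooling stage using the quoted hyperparameters."""
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3):
        super().__init__()
        # Convolution stride of 1, as stated; kernel size is a placeholder.
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, stride=1,
                              padding=kernel_size // 2)
        self.act = nn.ReLU()
        # Max-pooling window size and step size both set to 3, as stated.
        self.pool = nn.MaxPool1d(kernel_size=3, stride=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pool(self.act(self.conv(x)))

# Toy usage: a batch of 8 single-channel HRRPs with 256 range cells.
x = torch.randn(8, 1, 256)
block = OneDConvBlock(in_ch=1, out_ch=16)
print(block(x).shape)   # torch.Size([8, 16, 85])
# The cited comparison methods train with diffGrad [30]; torch.optim.Adam
# is a readily available substitute for quick experimentation.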
“…Commonly used sensitivity removal approaches include time-shift compensation, energy normalization, and average processing [54]-[56]. The DNN structures used for radar HRRP target recognition include the deep belief network [54], [55], recurrent attentional network [57], [58], concatenated neural network, CNNs [62]-[64], stacked auto-encoder (SAE) [65], and convolutional LSTM [66], [67].…”
Section: A. DL-Based ATR Using HRR Profiles
confidence: 99%
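A brief NumPy sketch of two of the sensitivity-removal steps named in this statement, energy normalization and time-shift compensation, is given below; the centroid-alignment criterion used for the time shift is an illustrative assumption, not the exact procedure of [54]-[56].

import numpy as np

def energy_normalize(hrrp: np.ndarray) -> np.ndarray:
    """Remove amplitude sensitivity by scaling the profile to unit L2 energy."""
    return hrrp / (np.linalg.norm(hrrp) + 1e-12)

def centroid_align(hrrp: np.ndarray) -> np.ndarray:
    """Remove time-shift sensitivity by circularly shifting the profile so
    its power centroid sits at the centre range cell (an assumed criterion)."""
    n = hrrp.size
    power = hrrp ** 2
    centroid = int(round(np.sum(np.arange(n) * power) / (np.sum(power) + 1e-12)))
    return np.roll(hrrp, n // 2 - centroid)

# Toy usage on a random magnitude profile with 256 range cells.
profile = np.abs(np.random.randn(256))
profile = centroid_align(energy_normalize(profile))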
“…Some researchers used measured HRRP data for performance evaluation. For example, the HRRP data from Yak-42 (large jet), Cessna Citation S/II (small jet), and An-26 (twin-engine turboprop) were used in [54]-[58]; the HRRP data from Airbus A319, A320, A321, and Boeing B738 were used in [59]; the HRRP data from seven types of ships of different sizes (lengths from 89.3 m to 182.8 m) were used in [60]; the HRRP data from various types of ground vehicles were used in [62], [66], [67]. Since most researchers only have access to a limited amount of HRRP measurement data associated with a handful of vehicles, many of them resort to simulated HRRP data generated by software based on the specific CAD models of vehicles for research purposes.…”
Section: A. DL-Based ATR Using HRR Profiles
confidence: 99%
“…The major concerns of RTR mainly relate to data acquisition, semantic feature discovery, and extraction methods. High-resolution signals like high-resolution range profile (HRRP) [1], [2], synthetic aperture radar (SAR) images [3], and inverse SAR (ISAR) images [4], [5] can present rich information [6] of targets but demand powerful radar. The radar cross section (RCS) signal, which characterizes the scattering shape and the movement pattern of the target, is widely used due to its easy availability and sufficient information for RTR.…”
Section: Introduction
confidence: 99%