Accurate recognition of low probability of intercept (LPI) radar waveforms is an important and challenging problem in electronic warfare. To address the difficulty of feature extraction and the low recognition rates of LPI radar signals at low signal-to-noise ratios, and inspired by symmetry theory, we propose a new LPI radar signal recognition method based on a dual-channel convolutional neural network (CNN) and feature fusion. The approach contains three main modules: a preprocessing module that converts the LPI radar waveforms into two-dimensional time-frequency images using the Choi–Williams distribution (CWD) and binarizes them; a feature extraction module that extracts different features from these images; and a recognition module that uses a multi-layer perceptron (MLP) network to fuse the features and distinguish the type of LPI radar signal. In the feature extraction module, a two-channel CNN model is proposed whose channels extract Histogram of Oriented Gradients (HOG) features and deep features from the time-frequency images, respectively. Finally, the recognition module classifies the radar signals with a softmax classifier applied to the fused two-channel features. Experimental results on 12 types of LPI radar signals demonstrate the superiority and robustness of the proposed model: its overall recognition rate reaches 97% at a signal-to-noise ratio of −6 dB.
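As an illustration of the fusion idea described above (not the authors' exact architecture), the following Python sketch pairs a small CNN branch with a HOG branch computed from a binarized time-frequency image and fuses both in an MLP whose output feeds a softmax classifier. Layer sizes, HOG parameters, the image size, and the 12-class output are assumptions made for the example.

```python
import numpy as np
import torch
import torch.nn as nn
from skimage.feature import hog

class DeepBranch(nn.Module):
    """Small CNN mapping a 1-channel time-frequency image to a deep feature vector."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4),
        )
        self.fc = nn.Linear(32 * 4 * 4, feat_dim)

    def forward(self, x):                      # x: (batch, 1, H, W)
        return self.fc(self.conv(x).flatten(1))

class FusionClassifier(nn.Module):
    """Fuses deep features with HOG features and outputs class scores."""
    def __init__(self, hog_dim, deep_dim=128, n_classes=12):
        super().__init__()
        self.deep = DeepBranch(deep_dim)
        self.mlp = nn.Sequential(
            nn.Linear(hog_dim + deep_dim, 256), nn.ReLU(),
            nn.Linear(256, n_classes),         # softmax is applied by the loss during training
        )

    def forward(self, img, hog_feat):
        fused = torch.cat([self.deep(img), hog_feat], dim=1)
        return self.mlp(fused)

# Toy usage: a random binary image stands in for a binarized CWD time-frequency image.
tf_img = (np.random.rand(128, 128) > 0.5).astype(np.float32)
hog_feat = hog(tf_img, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
model = FusionClassifier(hog_dim=hog_feat.size)
scores = model(torch.from_numpy(tf_img)[None, None],
               torch.from_numpy(hog_feat.astype(np.float32))[None])
print(scores.shape)                            # torch.Size([1, 12])
```

Note that the HOG descriptor length depends on the image size and cell/block settings, so the fusion MLP's input dimension is derived from an example descriptor rather than hard-coded.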
Objectives: To improve the recognition accuracy of radar signals at low signal-to-noise ratio (SNR). Technology or Method: We propose a novel radar signal recognition method based on a dual-channel model with histogram of oriented gradients (HOG) feature extraction. Specifically, the multisynchrosqueezing transform (MSST) and the Choi–Williams distribution (CWD) are applied individually to obtain time-frequency distribution images of the radar signals, and HOG features are extracted from the preprocessed time-frequency images of each channel. The features of the two channels are then fused and dimensionally reduced with principal component analysis (PCA). Finally, the compact feature vectors are fed to a support vector machine (SVM) classifier to identify the radar signals. Impact: The experimental results demonstrate that the proposed model achieves high recognition performance with low computational complexity, especially at low SNR. When the SNR is −12 dB, the recognition accuracy exceeds 92%, which is more than 6% higher than that of single-channel models and related CNN-based models.
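A minimal sketch of this pipeline, assuming the CWD and MSST time-frequency images have already been computed for each signal: HOG descriptors from the two channels are concatenated, reduced with PCA, and classified with an SVM. The scikit-image/scikit-learn parameters and the toy data are placeholders, not the paper's settings.

```python
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def dual_channel_hog(cwd_img, msst_img):
    """Concatenate HOG descriptors from the two time-frequency channels."""
    h1 = hog(cwd_img, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
    h2 = hog(msst_img, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
    return np.concatenate([h1, h2])

# Toy data: random "images" stand in for the real CWD/MSST distributions.
rng = np.random.default_rng(0)
X = np.stack([dual_channel_hog(rng.random((128, 128)), rng.random((128, 128)))
              for _ in range(40)])
y = np.arange(40) % 4                      # placeholder class labels

# PCA compresses the fused HOG vector; the SVM classifies the compact features.
clf = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.predict(X[:5]))
```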
3D point cloud classification has attracted considerable attention in recent years. Most existing point cloud processing frameworks lack context-aware features because they do not extract sufficient local feature information. We therefore design an augmented sampling and grouping (ASG) module to efficiently obtain fine-grained features from the original point cloud. In particular, this method strengthens the neighborhood of each centroid and makes use of the local mean and the global standard deviation to mine the point cloud's local and global features. In addition, inspired by the transformer structure UFO-ViT from 2D vision tasks, we make a first attempt to apply a linearly normalized attention mechanism to point cloud processing and investigate a novel transformer-based point cloud classification architecture, UFO-Net. An effective local feature learning module is adopted as a bridging technique to connect the different feature extraction modules. Importantly, UFO-Net employs multiple stacked blocks to better capture the feature representation of the point cloud. Extensive experiments and ablation studies on public datasets show that our method outperforms other state-of-the-art methods: the network achieves 93.7% overall accuracy on the ModelNet40 dataset, 0.5% higher than PCT, and 83.8% overall accuracy on the ScanObjectNN dataset, 3.8% better than PCT.
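A hedged sketch of the sampling-and-grouping idea described above (the exact ASG formulation in UFO-Net may differ): centroids are sampled, their k nearest neighbours are gathered, and each group is centred on its local mean and scaled by the cloud's global standard deviation. The sampling scheme, the value of k, and the shapes are illustrative assumptions.

```python
import torch

def sample_and_group(points, n_centroids=64, k=16):
    """points: (N, 3) point cloud -> (n_centroids, k, 3) grouped, normalized coordinates."""
    # Random centroid sampling for brevity; farthest point sampling is more typical.
    idx = torch.randperm(points.shape[0])[:n_centroids]
    centroids = points[idx]                                   # (C, 3)
    dists = torch.cdist(centroids, points)                    # (C, N)
    knn_idx = dists.topk(k, largest=False).indices            # (C, k)
    groups = points[knn_idx]                                  # (C, k, 3)

    local_mean = groups.mean(dim=1, keepdim=True)             # (C, 1, 3) local statistics
    global_std = points.std(dim=0)                            # (3,) global spread of the cloud
    # Centre each neighbourhood on its local mean and scale by the global standard deviation.
    return (groups - local_mean) / (global_std + 1e-6)

cloud = torch.randn(1024, 3)                                  # toy point cloud
grouped = sample_and_group(cloud)
print(grouped.shape)                                          # torch.Size([64, 16, 3])
```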