SAR is widely used in both military and civilian applications because of its round-the-clock, all-weather, high-resolution imaging and its ability to penetrate camouflage and cover. Within SAR image interpretation, target recognition is an important research challenge worldwide. With the application of high-resolution SAR, imaged areas have grown larger and new imaging modes have appeared one after another. Conventional human interpretation faces many difficulties: it is slow, labor-intensive, and prone to inconsistent judgment, so intelligent interpretation technology is urgently needed. Although deep CNNs have proven highly effective in image recognition, a major drawback is that their parameter count grows as layers are added. The cost of the convolution operation across all convolutional layers is therefore high, and the inevitable rise in computation as kernel size grows slows learning. This study proposes a three-way input of SAR images into a multi-stream fast Fourier convolutional neural network (MS-FFCNN). The technique transforms a rudimentary multi-stream convolutional neural network into a multi-stream fast Fourier convolutional neural network. By replacing standard convolution with the fast Fourier transform, it lowers the cost of image convolution in convolutional neural networks (CNNs), reducing the overall computational cost. The multiple streams of the FFCNN overcome the problem of insufficient sample size, shorten the long training time, and improve recognition accuracy. The proposed method yielded a recognition accuracy of 99.92%.
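The computational saving behind FFT-based convolution comes from the convolution theorem: convolution in the spatial domain becomes an element-wise product in the frequency domain, replacing an O(H·W·kh·kw) loop with O(H·W·log(H·W)) transforms. A minimal NumPy sketch (function names and shapes are illustrative, not the paper's implementation) showing that the two routes agree for circular convolution:

```python
import numpy as np

def fft_conv2d(image, kernel):
    """Circular 2-D convolution via the convolution theorem:
    conv(image, kernel) = IFFT2( FFT2(image) * FFT2(kernel) )."""
    kh, kw = kernel.shape
    # Zero-pad the kernel to the image size before transforming.
    kpad = np.zeros_like(image, dtype=float)
    kpad[:kh, :kw] = kernel
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kpad)))

def direct_circular_conv2d(image, kernel):
    """Reference implementation: the same circular convolution
    computed with explicit spatial-domain loops."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            s = 0.0
            for u in range(kh):
                for v in range(kw):
                    s += kernel[u, v] * image[(i - u) % H, (j - v) % W]
            out[i, j] = s
    return out
```

For fixed image size the FFT cost is independent of kernel size, which is why the saving grows with larger kernels, as the abstract notes.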
Synthetic Aperture Radar (SAR) target classification is one of the largest branches of SAR image analysis. Despite the remarkable achievements of deep learning-based SAR target prediction algorithms, current object recognition algorithms remain limited in military applications. Acquiring and labeling SAR target images is time-consuming and cumbersome, and obtaining adequate training data is often challenging. Deep learning-based models are prone to overfitting when training data are insufficient, which prevents them from being widely used for SAR target classification. To overcome the problem of insufficient samples and to learn more accurate representations for SAR image recognition, we propose a two-way input of SAR images into a dual-stream DCNN. Two input SAR image representations, built from the restricted raw SAR data, are concatenated so that the integral features of both representations can be extracted for classification. The proposed methodology addresses the problem of insufficient samples in SAR target classification and improves classification accuracy without overfitting, and experimental results confirm its effectiveness. The technique can be integrated into any convolutional neural network (CNN)-based SAR classification model. The MCDS-CNN model achieves a recognition accuracy of 99.1%. Despite the limited availability of SAR image data from MSTAR, this approach provides good recognition results.
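The dual-stream fusion described above can be sketched in a few lines: two independent feature extractors process two representations of the same SAR chip, their outputs are concatenated, and a shared classifier head operates on the fused vector. A toy NumPy sketch under assumed shapes (64-dimensional inputs, 32 features per stream, 10 target classes; all names and dimensions are hypothetical, and each stream is reduced to a linear map plus ReLU as a stand-in for a DCNN):

```python
import numpy as np

rng = np.random.default_rng(0)

def stream_features(x, W):
    """One stream's feature extractor: a linear map followed by ReLU,
    standing in for a full convolutional stream."""
    return np.maximum(0.0, x @ W)

# Two representations of the same SAR chip (hypothetical: e.g. two
# preprocessed views derived from the restricted raw SAR data).
x1 = rng.standard_normal(64)
x2 = rng.standard_normal(64)

# Independent weights per stream, then feature-level concatenation.
W1 = rng.standard_normal((64, 32))
W2 = rng.standard_normal((64, 32))
f = np.concatenate([stream_features(x1, W1), stream_features(x2, W2)])

# Shared softmax classifier head over the fused 64-d feature vector.
Wc = rng.standard_normal((64, 10))
logits = f @ Wc
probs = np.exp(logits - logits.max())
probs /= probs.sum()
```

Concatenation at the feature level lets each stream specialize on its own input representation while the classifier sees both jointly, which is the design choice the abstract attributes to MCDS-CNN.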