2021
DOI: 10.3390/rs13204158
Adversarial Self-Supervised Learning for Robust SAR Target Recognition

Abstract: Synthetic aperture radar (SAR) can perform observations at all times and has been widely used in the military field. Deep neural network (DNN)-based SAR target recognition models have achieved great success in recent years. Yet, the adversarial robustness of these models has received far less academic attention in the remote sensing community. In this article, we first present a comprehensive adversarial robustness evaluation framework for DNN-based SAR target recognition. Both data-oriented metrics and model-…

Cited by 19 publications (11 citation statements)
References 22 publications
“…This method used only unlabeled bitemporal images of objects, adopting deep clustering and contrastive learning methods to train the network in a self-supervised manner. In terms of target detection, studies [24][25][26] have effectively reduced the number of labeled samples required for target detection through contrastive SSL. In addition, Manas and Lacoste [27] introduced seasonal contrast (SeCo), a method that enhanced contrastive learning by treating images of the same location at different times as similar pairs.…”
Section: A Contrastive Ssl Methodsmentioning
confidence: 99%
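The SeCo idea above — treating images of the same location at different times as positive pairs — can be sketched with a minimal InfoNCE loss in numpy. This is an illustrative toy under assumed names (`info_nce`, random stand-in embeddings), not the cited implementation:

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE over a batch: anchors[i] and positives[i] show the same
    location in different seasons; every other row acts as a negative."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = (a @ p.T) / temperature             # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    exp = np.exp(logits)
    # the correct (same-location) pairing sits on the diagonal
    return float(np.mean(-np.log(np.diag(exp) / exp.sum(axis=1))))

rng = np.random.default_rng(0)
summer = rng.normal(size=(8, 32))                  # embeddings of 8 locations
winter = summer + 0.05 * rng.normal(size=(8, 32))  # same places, another season
loss = info_nce(summer, winter)                    # small: pairs already aligned
```

Mispairing the rows (shuffling `winter`) raises the loss, which is exactly the signal that drives the encoder toward season-invariant features.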
“…SSL can effectively alleviate the heavy dependence on annotated samples and has become a research hotspot in the field of remote sensing. SSL is widely used in remote sensing scene classification [2][3][4][5][6][7][8][9], image classification [10][11][12][13][14][15][16][17][18], semantic segmentation [19][20][21][22], change detection [23], and target recognition [24,25]. At present, there are two main representative SSL schemes, namely contrastive and generative SSL methods.…”
Section: Related Workmentioning
confidence: 99%
“…Wang et al [30] introduced a pseudolabeled few-shot SAR image classification method, employing a dual network and cross-training strategy. Xu et al [31] introduced adversarial self-supervised learning to SAR target recognition, maximizing the similarity between data-augmented SAR images and their adversarial examples. These studies collectively illustrate the feasibility of SSL techniques in the domain of SAR target recognition.…”
Section: Related Work a Self Supervised Learningmentioning
confidence: 99%
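The adversarial self-supervised objective described above can be illustrated with a small numpy sketch: an FGSM-style step perturbs one augmented view so its embedding moves away from the other view, and pre-training would then pull the pair back together. The linear "encoder" `W`, the numerical gradient, and all names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(64, 16)) / 8.0          # stand-in linear "encoder"

def embed(x):
    """Encode and L2-normalize, so dot products are cosine similarities."""
    z = x @ W
    return z / np.linalg.norm(z)

def fgsm_attack(x, ref, eps=0.1, h=1e-4):
    """One-step FGSM on the self-supervised objective: perturb x so its
    embedding moves AWAY from the reference view's embedding. Uses a
    numerical gradient for brevity; real pipelines use autograd."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = h
        grad[i] = (embed(x + d) @ ref - embed(x - d) @ ref) / (2 * h)
    return x - eps * np.sign(grad)           # step opposite the similarity gradient

x1 = rng.normal(size=64)                     # one augmented view of a SAR chip
x2 = x1 + 0.1 * rng.normal(size=64)          # a second, differently augmented view
ref = embed(x2)
x_adv = fgsm_attack(x1, ref)
# pre-training would then maximize embed(x_adv) @ ref to restore invariance
```

The attack lowers the cosine similarity between `x_adv` and the reference view; training the encoder to undo that drop is what yields adversarially robust features.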
“…Consequently, Tao, C. et al [62] carefully designed a sample-collection strategy to automatically gather unlabeled samples with class-balanced resampling in both natural and remote sensing scenes, and then used these samples in a contrastive learning pretext task that pulls different augmented views of the same image (i.e., positive sample pairs) closer together while pushing apart views of different images (i.e., negative sample pairs). Next, for a different pretext task setting, Xu, Y. et al [63] designed a novel unsupervised adversarial contrastive learning method to pre-train a CNN-based Siamese network, which maximized the feature similarity between augmented data and its corresponding unsupervised adversarial examples. Through the designed pretext task, [63] obtained competitive classification results on SAR target recognition datasets.…”
Section: Self-supervised Pre-trainingmentioning
confidence: 99%
“…In addition, to use prior information to assist pretext task design in self-supervised pre-training, Ayush, K. et al [64] introduced geography-awareness into the pretext task of invariant representation learning: it pulls positive pairs closer than unrelated negative pairs while simultaneously predicting the geo-location of the input images.…”
Section: Self-supervised Pre-trainingmentioning
confidence: 99%
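The geography-aware setup in [64] combines a view-invariance term with a geo-location prediction term. A hypothetical combined objective might look like the sketch below; the `alpha` weighting, the MSE head, and the lat/lon targets are stand-in assumptions, not the paper's exact loss:

```python
import numpy as np

def geo_aware_loss(z_a, z_b, geo_pred, geo_true, alpha=0.5):
    """Combine view-invariance (pull two augmented views' embeddings
    together) with a geo term (stand-in MSE on predicted coordinates)."""
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    invariance = 1.0 - np.sum(z_a * z_b, axis=1).mean()   # 1 - mean cosine
    geo = np.mean((geo_pred - geo_true) ** 2)             # geo-prediction error
    return invariance + alpha * geo

rng = np.random.default_rng(2)
views = rng.normal(size=(4, 8))                # embeddings of 4 image crops
coords = rng.uniform(-90, 90, size=(4, 2))     # hypothetical lat/lon targets
perfect = geo_aware_loss(views, views.copy(), coords, coords)   # ~0.0
```

Identical views with correct geo predictions give (near-)zero loss; either mismatched views or wrong locations raise it, so the encoder is pushed to learn both augmentation-invariant and location-predictive features.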