2020
DOI: 10.1109/access.2020.3011639
Attack Selectivity of Adversarial Examples in Remote Sensing Image Scene Classification

Abstract: Remote sensing image (RSI) scene classification is a foundational and important technology for ground object detection, land-use management, and geographic analysis. In recent years, convolutional neural networks (CNNs) have achieved significant success and are widely applied in RSI scene classification. However, crafted images that serve as adversarial examples can fool CNNs with high confidence while being hard for human eyes to perceive. For the increasing security and robustness requirements of RSI …
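As background for the attack described in the abstract, the snippet below is a minimal, hedged sketch of crafting an adversarial example with one-step FGSM against a CNN scene classifier in PyTorch. It is not the paper's method; the model, class count, image, and epsilon are illustrative assumptions.

import torch
import torch.nn.functional as F
import torchvision.models as models

def fgsm_attack(model, image, label, epsilon=0.03):
    # Perturb `image` (1xCxHxW, values in [0, 1]) one step along the sign of the loss gradient.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    model.zero_grad()
    loss.backward()
    adv = image + epsilon * image.grad.sign()   # move in the direction that increases the loss
    return adv.clamp(0.0, 1.0).detach()

model = models.resnet18(num_classes=30).eval()   # hypothetical 30-class RSI scene classifier
x = torch.rand(1, 3, 224, 224)                   # placeholder image; a real RSI would be loaded here
y = torch.tensor([5])                            # its ground-truth class index
x_adv = fgsm_attack(model, x, y)
print("clean:", model(x).argmax(1).item(), "adversarial:", model(x_adv).argmax(1).item())

With a trained classifier and a real image, the adversarial prediction would often differ from the clean one even though the perturbation is visually negligible.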

Cited by 17 publications (5 citation statements) | References: 59 publications
“…Previous papers have indicated adversarial vulnerability in DNN models for RSIs, which poses a security problem for modern UAVs. They mainly involve digital attacks on scene classification [53][54][55] and target recognition [56][57][58], which are consistent with the threat model assumed in this article. Moreover, there are some explorations of adversarial patches that can be printed and applied to a physical scene or target [59][60][61].…”
Section: Adversarial Vulnerability in DNN-based UAVs (supporting)
confidence: 76%
“…The reasons for this phenomenon may be the homogeneity and heterogeneity among categories. The work [40] found that the misclassified categories of adversarial examples are more likely to be categories that lie closer to them in the sample's feature space. Meanwhile, it can be observed that the similarity among SAR images from different categories is well reflected by the misclassified category distributions.…”
Section: Misclassified Category Distributions of the Adversarial Attack (mentioning)
confidence: 78%
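To make the cited "attack selectivity" observation concrete, here is a small, hedged sketch (not from the cited works) that ranks classes by the distance between their centroids in a feature space; under the observation above, adversarial examples from a source class would mostly be misclassified into its nearest classes. The feature array, labels, and class count are synthetic placeholders.

import numpy as np

def class_centroids(features, labels, num_classes):
    # Mean feature vector per class; `features` is (N, D), `labels` is (N,).
    return np.stack([features[labels == c].mean(axis=0) for c in range(num_classes)])

def nearest_classes(centroids, source_class):
    # Classes ranked by Euclidean distance to the source class centroid (excluding itself).
    d = np.linalg.norm(centroids - centroids[source_class], axis=1)
    order = np.argsort(d)
    return order[order != source_class]

# Toy usage: with real data, `features` would be the CNN's penultimate-layer activations.
rng = np.random.default_rng(0)
features = rng.normal(size=(600, 128))
labels = rng.integers(0, 6, size=600)
cents = class_centroids(features, labels, num_classes=6)
print("classes closest to class 2:", nearest_classes(cents, source_class=2)[:3])
# Attack selectivity would appear as adversarial examples from class 2 being
# misclassified mostly into these nearest classes.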
“…Figure 18 illustrates the captured physical adversarial examples [131][132]. Apart from two adversarial patch attacks against object detectors on aerial imagery datasets, there have been a number of attempts to attack remote sensing image recognition and detection [133][134][135].…”
Section: Type On (mentioning)
confidence: 99%
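As a rough illustration of the digital side of the patch attacks mentioned above (a hedged sketch, not any cited method), the snippet below pastes a patch tensor into a fixed region of an image; in real attacks the patch contents would be optimized to fool the detector or classifier.

import torch

def apply_patch(image, patch, top, left):
    # Paste `patch` (C, h, w) into `image` (C, H, W) at the given top-left corner.
    patched = image.clone()
    _, h, w = patch.shape
    patched[:, top:top + h, left:left + w] = patch
    return patched

img = torch.rand(3, 224, 224)    # placeholder aerial image
patch = torch.rand(3, 40, 40)    # placeholder patch; real attacks optimize its contents
adv = apply_patch(img, patch, top=90, left=90)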