2022
DOI: 10.48550/arxiv.2207.07803
Preprint
Masked Spatial-Spectral Autoencoders Are Excellent Hyperspectral Defenders

Cited by 1 publication (2 citation statements)
References 0 publications
“…The authors in [11] use their Masked Spatial-Spectral Autoencoder (MSSA) which consists of masked sequence attention learning, dynamic graph embedding, and self-supervised reconstruction. Rather than focusing on semantic segmentation via network architecture, some works like [12,13] try to use the rich spectral information to robustify the entire process. In [12], they propose a spectral sampling and shape encoding to increase adversarial robustness as a preprocessing step to traditional per-pixel classification via random sampling.…”
Section: Related Work
Mentioning confidence: 99%
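
The spectral-sampling defense summarized above ([12]) amounts to randomly keeping only a subset of bands before per-pixel classification. The sketch below is a minimal Python/NumPy illustration of that idea; the function name, band counts, and the choice to preserve spectral ordering are assumptions for illustration, not the preprocessing pipeline from [12].

import numpy as np

def random_band_sample(cube, n_bands, seed=None):
    # Randomly keep n_bands of the B spectral bands in an (H, W, B) cube.
    # Hypothetical helper: band count and sampling strategy are illustrative only.
    rng = np.random.default_rng(seed)
    idx = rng.choice(cube.shape[-1], size=n_bands, replace=False)
    idx.sort()  # preserve spectral ordering for the downstream classifier
    return cube[..., idx], idx

# Toy usage: a 64x64 cube with 200 bands reduced to 30 randomly chosen bands.
cube = np.random.rand(64, 64, 200).astype(np.float32)
sampled, kept = random_band_sample(cube, n_bands=30, seed=0)
print(sampled.shape)  # (64, 64, 30)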
“…In [12], they propose a spectral sampling and shape encoding to increase adversarial robustness as a preprocessing step to traditional per-pixel classification via random sampling. In work [13], rather than random sampling, they use autoencoders to reconstruct the spectral signature of pixels for later classification with a shared global loss function. However, all these approaches focus on a single attack at a time and do not explore network robustness in the presence of multiple attacks.…”
Section: Related Work
Mentioning confidence: 99%
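
Both quoted passages revolve around reconstruction-based defenses: the self-supervised reconstruction inside MSSA [11] and the autoencoder spectral-signature reconstruction of [13]. The toy sketch below only illustrates the generic mask-then-reconstruct idea on single-pixel spectra; the layer sizes, masking ratio, and loss are assumptions and do not reproduce the architectures of [11] or [13].

import torch
import torch.nn as nn

class TinySpectralMAE(nn.Module):
    # Toy masked autoencoder over a single pixel's spectrum with n_bands bands.
    # Purely illustrative: not the MSSA architecture from the cited paper.
    def __init__(self, n_bands=200, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_bands, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, n_bands)

    def forward(self, spectra, mask_ratio=0.5):
        # Hide a random subset of bands, then reconstruct the full spectrum.
        mask = (torch.rand_like(spectra) > mask_ratio).float()
        recon = self.decoder(self.encoder(spectra * mask))
        # Penalize reconstruction error only on the bands that were masked out.
        masked = 1.0 - mask
        loss = ((recon - spectra) ** 2 * masked).sum() / masked.sum().clamp(min=1.0)
        return recon, loss

# Toy usage: one training step on a batch of random spectra.
model = TinySpectralMAE(n_bands=200)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
spectra = torch.rand(32, 200)
_, loss = model(spectra)
loss.backward()
opt.step()
print(float(loss))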