2020
DOI: 10.48550/arxiv.2005.03141
Preprint

Towards Frequency-Based Explanation for Robust CNN

Abstract: Current explanation techniques towards a transparent Convolutional Neural Network (CNN) mainly focus on building connections between human-understandable input features and the model's predictions, overlooking an alternative representation of the input: its decomposition into frequency components. In this work, we present an analysis of the connection between the distribution of frequency components in the input dataset and the reasoning process the model learns from the data. We further provide quantification an…

Cited by 10 publications (8 citation statements)
References 16 publications
“…Instead, the authors propose the texture hypothesis, stating that an inherent texture bias in the dataset can lead to a lack of robustness in CNNs. Similarly, other works [32] have reported a higher importance of texture-like high-frequency input features, which aligns with the vulnerability to high-frequency adversarial attacks [30]. Through an expensive modification of the training dataset that exchanges its texture bias for a shape bias, the authors of [9] achieve improved classification robustness.…”
Section: Introduction
confidence: 56%
“…FcaNet [76] generalized the pre-processing of the channel attention mechanism in the frequency domain. Wang et al. [77] applied an analysis of the connection between the distribution of frequency components in the input dataset to explain the CNN's behavior. Since convolution in the spatial domain has been proven equivalent to multiplication in the frequency domain [78], [79] performed video knowledge distillation in the frequency domain for action recognition.…”
Section: Frequency Domain Learning
confidence: 99%
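The spatial/frequency equivalence invoked in this statement is the convolution theorem: circular convolution in the spatial domain equals element-wise multiplication of the Fourier transforms. A minimal NumPy check (signal length and seed are arbitrary choices, not from the cited works):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
x = rng.standard_normal(n)  # signal
k = rng.standard_normal(n)  # kernel (same length, circular convolution)

# Frequency-domain route: multiply the spectra, then invert.
via_fft = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)))

# Direct circular convolution in the spatial domain for comparison.
direct = np.array(
    [sum(x[j] * k[(i - j) % n] for j in range(n)) for i in range(n)]
)

assert np.allclose(via_fft, direct)
```

The agreement of the two routes is what lets frequency-domain methods such as [79] trade spatial convolutions for pointwise products.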
“…Contrastive view from the high-frequency component. Next, we use the high-frequency component (HFC) of the data as another additional contrastive view. The rationale arises from two facts: 1) learning over the HFC of the data is a main cause of achieving superior generalization ability [52], and 2) an adversary typically concentrates on the HFC when manipulating an example to fool the model's decision [53]. Let F and F−1 denote the Fourier transform and its inverse.…”
Section: Contrastive View From Adversarial Example
confidence: 99%
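The HFC extraction sketched in this statement (apply F, suppress low frequencies, apply F−1) can be written with a radial high-pass mask in the 2-D Fourier domain. A minimal NumPy sketch; the function name, mask shape, and cutoff radius are illustrative assumptions, not taken from the cited work:

```python
import numpy as np

def high_frequency_component(img, radius):
    """Zero all frequencies within `radius` of the spectrum centre
    (a radial high-pass mask) and transform back to the spatial domain."""
    f = np.fft.fftshift(np.fft.fft2(img))       # centre the spectrum
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    f[dist <= radius] = 0                       # suppress low frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))

img = np.random.default_rng(1).standard_normal((32, 32))
hfc = high_frequency_component(img, radius=8)
lfc = img - hfc  # the complementary low-frequency component
```

Because the mask removes the DC bin, the HFC has (numerically) zero mean, and HFC plus LFC reconstructs the original image exactly, which is what makes the HFC usable as a faithful contrastive view of the same example.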