2020
DOI: 10.48550/arxiv.2010.05981
Preprint

Shape-Texture Debiased Neural Network Training

Abstract: Shape and texture are two prominent and complementary cues for recognizing objects. Nonetheless, Convolutional Neural Networks are often biased towards either texture or shape, depending on the training dataset. Our ablation shows that such bias degenerates model performance. Motivated by this observation, we develop a simple algorithm for shape-texture debiased learning. To prevent models from exclusively attending on a single cue in representation learning, we augment training data with images with conflicti…
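
A minimal sketch of the recipe described in the abstract, assuming a PyTorch setup: the key move is to supervise a style-transferred image with both its shape (content) label and its texture (style) label, so that neither cue alone explains the target. The `style_transfer` helper is an assumption standing in for an AdaIN-style stylizer; this is an illustration, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def debiased_loss(model, images, labels, style_transfer, alpha=0.5):
    """Shape-texture debiased training loss (illustrative sketch)."""
    # Pair each image with a randomly chosen "texture donor" from the batch.
    perm = torch.randperm(images.size(0), device=images.device)
    # `style_transfer` (assumed helper) keeps the shape of `content`
    # but re-renders it with the texture of `style`.
    stylized = style_transfer(content=images, style=images[perm])

    logits = model(stylized)
    # Supervise with BOTH cues: the shape label and the texture label.
    loss_shape = F.cross_entropy(logits, labels)
    loss_texture = F.cross_entropy(logits, labels[perm])
    return alpha * loss_shape + (1.0 - alpha) * loss_texture
```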

Cited by 15 publications (14 citation statements)
References: 34 publications
“…In contrast, a texture-biased model is trained if the translated image is labeled as lemon. To balance the bias, the translated image by style transferring is taken with two labels [65], both chimpanzee and lemon, which leads to a de-biased model. Further, inspired by Mixup [38], Hong et al propose…”
Section: Label-changing
confidence: 99%
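
For reference, the Mixup recipe that the excerpt's last sentence points to interpolates images and labels with the same coefficient. The sketch below is plain Mixup (Zhang et al.), not the truncated extension by Hong et al.:

```python
import torch
import torch.nn.functional as F

def mixup_loss(model, images, labels, beta=1.0):
    """Standard Mixup: convex-combine two images and weight the two
    cross-entropy terms by the same mixing coefficient."""
    lam = torch.distributions.Beta(beta, beta).sample().item()
    perm = torch.randperm(images.size(0), device=images.device)

    mixed = lam * images + (1.0 - lam) * images[perm]
    logits = model(mixed)
    return lam * F.cross_entropy(logits, labels) \
        + (1.0 - lam) * F.cross_entropy(logits, labels[perm])
```

The parallel with the debiased recipe above is direct: Mixup mixes pixels linearly, whereas the debiased scheme mixes shape and texture through style transfer.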
“…Both (Shafahi et al., 2019) and (Zhang et al., 2019) propose to merge the gradient for adversarial attacks and the gradient for network parameter updates into a single forward and backward pass to reduce computations. Wong et al. (Wong et al., 2020) further reduce this cost. Later works improve AdvProp from different aspects (Chen et al., 2021b; Xu et al., 2021; Shu et al., 2020; Chen et al., 2021a; Gong et al., 2021), under different learning paradigms (Ho & Vasconcelos, 2020; Xu & Yang, 2020), with different adversarial data (Merchant et al., 2020; Li et al., 2020; Herrmann et al., 2021), enabling extremely large-batch training (Liu et al., 2022), etc. In this paper, rather than furthering performance, we aim to make AdvProp "free".…”
Section: Related Work
confidence: 99%
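
The first sentence of this excerpt describes folding the attack gradient and the weight-update gradient into one forward/backward pass. Below is a minimal PyTorch sketch of that idea in the spirit of "free" adversarial training; the replay count, step size, and perturbation handling are assumptions rather than the cited papers' exact procedures.

```python
import torch
import torch.nn.functional as F

def free_adv_epoch(model, loader, optimizer, epsilon=8 / 255, replays=4):
    """Single-pass adversarial training (sketch): each backward pass
    yields gradients w.r.t. both the weights (used by the optimizer)
    and the perturbation (used to refresh the attack), so no extra
    passes are spent crafting adversarial examples."""
    delta = None
    for images, labels in loader:
        if delta is None or delta.shape != images.shape:
            delta = torch.zeros_like(images)
        for _ in range(replays):
            delta.requires_grad_(True)
            loss = F.cross_entropy(model(images + delta), labels)

            optimizer.zero_grad()
            loss.backward()      # gradients for weights AND for delta
            optimizer.step()     # weight update from the same pass

            # Ascent step on the perturbation, kept inside the L_inf ball.
            delta = (delta + epsilon * delta.grad.sign()).clamp(-epsilon, epsilon)
            delta = delta.detach()
    return model
```

AdvProp additionally routes clean and adversarial images through separate batch-norm statistics; that detail is omitted here for brevity.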
“…The robustness research of CNNs has experienced explosive development in recent years. Numerous works conduct thorough studies on the robustness of CNNs and aim to strengthen it in different ways, e.g., stronger data augmentation [14,16,17], carefully designed [18,19] or searched [20,21] network architectures, improved training strategies [22][23][24], quantization [25] and pruning [26] of the weights, better pooling [27,28] or activation functions [29], etc. Although the methods mentioned above perform well on CNNs, there is no evidence that they remain effective on ViTs.…”
Section: Robustness Study for CNNs
confidence: 99%