2022
DOI: 10.1007/978-3-031-19818-2_18
SegPGD: An Effective and Efficient Adversarial Attack for Evaluating and Boosting Segmentation Robustness

Cited by 42 publications (8 citation statements)
References 38 publications

“…It is a way to generate such examples through various techniques. It can be divided into test-time adversarial attack methods (Goodfellow, Shlens, and Szegedy 2015; Carlini and Wagner 2017; Madry et al. 2018; Gu et al. 2022; Agnihotri and Keuper 2023) and training-time ones (Feng, Cai, and Zhou 2019). The former generate adversarial images during inference and confuse the model into producing false predictions.…”
Section: Adversarial Attack
confidence: 99%
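To ground the test-time category the excerpt describes, here is a minimal PyTorch-style sketch of an L∞ PGD attack (Madry et al. 2018); the `model`, `loss_fn`, and the `eps`/`alpha`/`steps` values are illustrative assumptions, not settings taken from the cited works.

```python
import torch

def pgd_attack(model, x, y, loss_fn, eps=8/255, alpha=2/255, steps=10):
    """L-inf PGD sketch: repeatedly ascend the loss w.r.t. the input and
    project back into the eps-ball around the clean image x (pixels in [0, 1])."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)  # random start
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()   # gradient-sign ascent step
        x_adv = x + (x_adv - x).clamp(-eps, eps)       # project into the eps-ball
        x_adv = x_adv.clamp(0, 1)                      # keep a valid image
    return x_adv.detach()
```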
“…Instead of the traditional cross-entropy loss, [54] designed a loss function that can attack image regions far from the patch. It contains several separate loss terms that exclude the patch pixels, with the aim of gradually shifting the focus from increasing the number of misclassified pixels to increasing the adversarial strength of the patch on those misclassified pixels, thereby improving the attacker's ability to induce pixel misclassification; the paper also validates the effectiveness of the scenario-specific attack. [55] proposed a segmentation attack method called "SegPGD"; the experimental results showed that it converges faster and to a better result than PGD.…”
Section: Adversarial Attacks in Semantic Segmentation
confidence: 99%
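As a rough illustration of why SegPGD converges faster than plain PGD, below is a PyTorch-style sketch of its reweighted per-pixel loss, assuming the linear schedule λ = (t − 1)/2T described in the SegPGD paper; the function and argument names here are our own.

```python
import torch
import torch.nn.functional as F

def segpgd_loss(logits, target, t, T):
    """Per-iteration SegPGD loss sketch: split the per-pixel cross-entropy
    into correctly and wrongly classified pixels and blend the two parts
    with a weight lam that grows linearly over the T attack iterations."""
    lam = (t - 1) / (2 * T)                                  # assumed schedule
    ce = F.cross_entropy(logits, target, reduction="none")   # (N, H, W) per-pixel CE
    still_correct = (logits.argmax(dim=1) == target).float()
    # early iterations (small lam) push still-correct pixels to flip; later
    # iterations strengthen the attack on already-misclassified pixels
    weighted = (1.0 - lam) * ce * still_correct + lam * ce * (1.0 - still_correct)
    return weighted.mean()
```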
“…Paik, Kwak, and Kim (2019) observed that increasing the depth of various CapsNet variants did not improve accuracy, and that routing algorithms, the core components of capsule implementations, do not provide any accuracy benefit in image classification. Michels et al. (2019) and Gu, Wu, and Tresp (2021) showed that CapsNets can be as easily fooled as ConvNets when it comes to adversarial attacks. Gu, Tresp, and Hu (2021) showed that the individual parts of the CapsNet have contradictory effects on the performance on different tasks and concluded that, with the right baseline, CapsNets are not generally superior to ConvNets.…”
Section: Related Work
confidence: 99%