2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52688.2022.02021

FIBA: Frequency-Injection based Backdoor Attack in Medical Image Analysis

Abstract: Deep learning-based face restoration models, increasingly prevalent in smart devices, have become targets for sophisticated backdoor attacks. These attacks, through subtle trigger injection into input face images, can lead to unexpected restoration outcomes. Unlike conventional methods focused on classification tasks, our approach introduces a unique degradation objective tailored for attacking restoration models. Moreover, we propose the Adaptive Selective Frequency Injection Backdoor Attack (AS-FIBA) framewo…
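
As context for the frequency-injection idea in the abstract, here is a minimal sketch of how such a trigger can be embedded: the low-frequency band of the benign image's amplitude spectrum is blended with that of a trigger image while the phase is left untouched, keeping the change visually subtle. The blending strength `alpha`, the low-frequency window `beta`, and the function name are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def inject_frequency_trigger(benign, trigger, alpha=0.15, beta=0.1):
    """Blend the low-frequency amplitude spectrum of `trigger` into `benign`.

    benign, trigger: float arrays in [0, 1], shape (H, W) or (H, W, C).
    alpha: blending strength for the trigger amplitude (illustrative value).
    beta: fraction of the spectrum around the center treated as low-frequency.
    """
    fft_b = np.fft.fft2(benign, axes=(0, 1))
    fft_t = np.fft.fft2(trigger, axes=(0, 1))

    amp_b, phase_b = np.abs(fft_b), np.angle(fft_b)
    amp_t = np.abs(fft_t)

    # Shift so the low frequencies sit at the center of the spectrum.
    amp_b = np.fft.fftshift(amp_b, axes=(0, 1))
    amp_t = np.fft.fftshift(amp_t, axes=(0, 1))

    h, w = benign.shape[:2]
    bh, bw = int(h * beta), int(w * beta)
    ch, cw = h // 2, w // 2

    # Blend only the central (low-frequency) band of the amplitude spectrum;
    # the benign image's phase is kept intact, so the trigger stays subtle.
    amp_b[ch - bh:ch + bh, cw - bw:cw + bw] = (
        (1 - alpha) * amp_b[ch - bh:ch + bh, cw - bw:cw + bw]
        + alpha * amp_t[ch - bh:ch + bh, cw - bw:cw + bw]
    )

    amp_b = np.fft.ifftshift(amp_b, axes=(0, 1))
    poisoned = np.fft.ifft2(amp_b * np.exp(1j * phase_b), axes=(0, 1))
    return np.clip(np.real(poisoned), 0.0, 1.0)
```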

Cited by 60 publications (15 citation statements). References 59 publications (51 reference statements).

“…The poisoned model behaves normally on benign instances, but its predictions change consistently to the attacker's desired target class when a particular trigger is used to activate the injected backdoor. Most backdoor injections (Gu, Dolan-Gavitt, and Garg 2017; Chen et al. 2017; Barni, Kallas, and Tondi 2019; Salem et al. 2022; Nguyen and Tran 2021; Zhang et al. 2022; Li et al. 2021a; Lin et al. 2020; Wang et al. 2021; Feng et al. 2022) occur during the training process, where the attacker contributes a set of training data embedded with a particular trigger pattern. As a result, the compromised model exhibits the backdoor behavior when the same trigger pattern is present at testing time.…”
Section: Background and Related Work
confidence: 99%
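
To make the training-time injection described in this excerpt concrete, here is a minimal sketch of BadNets-style data poisoning (Gu, Dolan-Gavitt, and Garg 2017): a small patch trigger is stamped onto a fraction of the training images and their labels are flipped to the attacker's target class. The 3x3 corner patch, the poisoning rate, and the function name are illustrative assumptions.

```python
import numpy as np

def poison_dataset(images, labels, target_class, rate=0.1, patch_value=1.0):
    """Training-time poisoning: stamp a patch trigger on a fraction of the
    training images and flip their labels to the target class.

    images: float array (N, H, W, C) in [0, 1]; labels: int array (N,).
    rate: fraction of the training set to poison (illustrative value).
    """
    poisoned = images.copy()
    new_labels = labels.copy()
    n_poison = int(len(images) * rate)
    idx = np.random.choice(len(images), n_poison, replace=False)

    # A 3x3 bright patch in the bottom-right corner serves as the trigger.
    poisoned[idx, -3:, -3:, :] = patch_value
    new_labels[idx] = target_class
    return poisoned, new_labels
```

A model trained on the returned dataset learns to associate the corner patch with the target class while behaving normally on patch-free inputs.
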
“…The objective of a backdoor attack on WFL is to mislead the global model into misclassifying inputs carrying the injected backdoor pattern as the attacker's target output, with applications to image classification [127]-[136], word prediction [137]-[142], etc. For instance, in the image classification task discussed in [33], an attacker aims to mislead the global model into classifying images of "truck" as "airplane".…”
Section: Evaluation Metrics for Backdoor Attacks
confidence: 99%
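
Since this excerpt comes from a section on evaluation metrics, a short sketch of the two standard measurements may help: attack success rate on triggered inputs and accuracy on benign inputs. The `model_predict` callable is a hypothetical helper, assumed to return predicted class labels for a batch.

```python
import numpy as np

def attack_success_rate(model_predict, triggered_inputs, target_class):
    """Fraction of triggered inputs the model assigns to the target class."""
    preds = np.asarray(model_predict(triggered_inputs))
    return float(np.mean(preds == target_class))

def clean_accuracy(model_predict, benign_inputs, true_labels):
    """Accuracy on benign inputs; a stealthy backdoor should leave this high."""
    preds = np.asarray(model_predict(benign_inputs))
    return float(np.mean(preds == np.asarray(true_labels)))
```
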
“…Recently, advanced backdoor attacks have become harder to detect by improving trigger designs or leveraging stealthier backdoor-implanting methods. Some of these methods use feature-space or more complex triggers, such as the frequency-domain backdoor (Feng et al. 2022), latent-space backdoor (Yao et al. 2019), blend backdoor (Li et al. 2021), reflection backdoor (Liu et al. 2020), and composite backdoor (Lin et al. 2020). Unlike small patches, these triggers are difficult to reverse-engineer as small-area patterns and thus pose challenges in distinguishing them from universal AEs.…”
Section: Introduction
confidence: 99%
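
As an illustration of why such triggers resist patch-style reverse engineering, here is a minimal sketch of a blend-style trigger: the trigger image spans the whole input and is mixed in at low opacity, so there is no small-area pattern for patch-based trigger reconstruction to recover. The blending weight `alpha` is an illustrative value, not a setting from any cited paper.

```python
import numpy as np

def blend_trigger(image, trigger, alpha=0.1):
    """Alpha-blend a full-size trigger image into the input instead of
    stamping a localized patch; the perturbation covers the whole image
    at low opacity.

    image, trigger: float arrays in [0, 1] with the same shape.
    alpha: blending weight (illustrative value).
    """
    return np.clip((1.0 - alpha) * image + alpha * trigger, 0.0, 1.0)
```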