2021 · Preprint
DOI: 10.48550/arxiv.2112.01148

FIBA: Frequency-Injection based Backdoor Attack in Medical Image Analysis

Abstract: In recent years, the security of AI systems has drawn increasing research attention, especially in the medical imaging realm. To develop a secure medical image analysis (MIA) system, it is a must to study possible backdoor attacks (BAs), which can embed hidden malicious behaviors into the system. However, designing a unified BA method that can be applied to various MIA systems is challenging due to the diversity of imaging modalities (e.g., X-Ray, CT, and MRI) and analysis tasks (e.g., classification, detectio…
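The abstract refers to a frequency-injection trigger. The sketch below illustrates the general idea of injecting a backdoor trigger in the frequency domain: part of a trigger image's amplitude spectrum is blended into a benign image while the benign image's phase is kept. This is a hedged illustration only; the function name inject_frequency_trigger, the blend ratio alpha, and the window size beta are assumptions for exposition, not the authors' exact procedure.

import numpy as np

def inject_frequency_trigger(image, trigger, alpha=0.15, beta=0.1):
    """Blend the low-frequency amplitude spectrum of `trigger` into `image`.

    Hypothetical sketch of frequency-domain trigger injection for a 2-D
    grayscale image in [0, 255]: the benign image's phase is preserved and
    only a central (low-frequency) window of the amplitude spectrum is mixed.
    `alpha` (blend ratio) and `beta` (window size as a fraction of the
    spectrum) are illustrative parameters.
    """
    # 2-D FFT of both images, shifted so low frequencies sit at the center
    img_f = np.fft.fftshift(np.fft.fft2(image.astype(np.float64)))
    trg_f = np.fft.fftshift(np.fft.fft2(trigger.astype(np.float64)))

    img_amp, img_phase = np.abs(img_f), np.angle(img_f)
    trg_amp = np.abs(trg_f)

    # Blend only a small central window of the amplitude spectrum
    h, w = image.shape
    ch, cw = h // 2, w // 2
    bh, bw = int(beta * h / 2), int(beta * w / 2)
    img_amp[ch - bh:ch + bh, cw - bw:cw + bw] = (
        (1 - alpha) * img_amp[ch - bh:ch + bh, cw - bw:cw + bw]
        + alpha * trg_amp[ch - bh:ch + bh, cw - bw:cw + bw]
    )

    # Recombine the blended amplitude with the original phase and invert
    poisoned_f = img_amp * np.exp(1j * img_phase)
    poisoned = np.real(np.fft.ifft2(np.fft.ifftshift(poisoned_f)))
    return np.clip(poisoned, 0, 255).astype(image.dtype)

Because only low-frequency amplitude is altered, the poisoned image stays visually close to the original, which is what makes this style of trigger hard to spot by eye.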

Cited by 2 publications (2 citation statements) · References 39 publications
“…Previous studies of adversarial attacks in medical imaging have focused on clinical applications where a malicious party would be interested in altering the prediction outcomes for financial or other purposes. Most of these studies implemented evasion attacks [11,10], while a smaller subset used poisoning attacks [19,9]. An equally relevant yet understudied motivation in scientific machine learning is the feasibility of manipulating data to improve model performance falsely.…”
Section: Introduction (mentioning)
Confidence: 99%
“…In the inference process, the backdoored model behaves normally on benign data while its prediction will be maliciously altered when the backdoor is activated. The risk of backdoor attacks hinders the applications of DNNs to some safety-critical areas such as automatic driving [38] and healthcare systems [14].…”
Section: Introduction (mentioning)
Confidence: 99%
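The statement above describes the standard inference-time behavior of a backdoored model: correct predictions on clean inputs, attacker-chosen predictions once the trigger is present. A minimal, hypothetical check of that behavior might look like the following; model, trigger_fn, and target_label are assumptions, not artifacts from the paper.

import torch

@torch.no_grad()
def backdoor_behavior(model, benign_x, trigger_fn, target_label):
    """Illustrative check: a backdoored classifier is expected to label
    benign inputs normally but predict `target_label` after `trigger_fn`
    (e.g., a frequency-injection trigger) is applied to the same batch."""
    model.eval()
    clean_pred = model(benign_x).argmax(dim=1)                 # expected: true labels
    poisoned_pred = model(trigger_fn(benign_x)).argmax(dim=1)  # expected: target_label
    return clean_pred, poisoned_pred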