2023
DOI: 10.48550/arxiv.2302.01762
Preprint

BackdoorBox: A Python Toolbox for Backdoor Learning

Abstract: Third-party resources (e.g., samples, backbones, and pre-trained models) are usually involved in the training of deep neural networks (DNNs), which brings backdoor attacks as a new training-phase threat. In general, backdoor attackers intend to implant hidden backdoors in DNNs, so that the attacked DNNs behave normally on benign samples whereas their predictions will be maliciously changed to a predefined target label if the hidden backdoors are activated by attacker-specified trigger patterns. To facilitate the re…
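The threat model the abstract describes can be sketched as a minimal BadNets-style poisoning step. This is an illustrative sketch only, not BackdoorBox's actual API: the function names, the 3×3 corner trigger, and the 5% poisoning rate are all assumptions for the example.

```python
import numpy as np

def poison_sample(image, target_label, trigger_value=1.0):
    """Stamp a small trigger patch onto one image and relabel it.

    A model trained on such samples behaves normally on clean inputs but
    predicts `target_label` whenever the trigger is present.
    (Illustrative sketch; not BackdoorBox code.)
    """
    poisoned = image.copy()
    poisoned[-3:, -3:] = trigger_value  # 3x3 trigger in the bottom-right corner
    return poisoned, target_label

def poison_dataset(images, labels, target_label, rate=0.05, seed=0):
    """Poison a small random fraction of the training set (the poisoning rate)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images, labels = images.copy(), labels.copy()
    for i in idx:
        images[i], labels[i] = poison_sample(images[i], target_label)
    return images, labels
```

Training on the returned `(images, labels)` pair is what implants the backdoor; the clean copies are left untouched so benign accuracy can still be measured.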

Cited by 2 publications (4 citation statements)
References 13 publications
“…For each attack, we randomly select the target label and inject sufficient poisoned samples to ensure the attack success rate ≥ 98% while preserving the overall model performance. We implement these attacks based on the open-sourced backdoor toolbox (Li et al, 2023). We demonstrate the trigger patterns of adopted attacks for Tiny ImageNet in Figure 5.…”
Section: Main Settings (mentioning)
confidence: 99%
“…Specifically, the trigger for BadNets is a 4 × 4 square consisting of random pixel values; the trigger of ISSBA is generated via DNN-based image steganography (Tancik et al, 2020). Both attacks are implemented via BackdoorBox (Li et al, 2023).…”
Section: B. The Detailed Configurations of the Empirical Study (mentioning)
confidence: 99%
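The quoted configuration describes the BadNets trigger as a 4 × 4 square of random pixel values. That setup can be reproduced roughly as follows; this is a sketch under assumed conventions (uint8 RGB images, bottom-right placement, a fixed seed), not the cited papers' or BackdoorBox's exact code:

```python
import numpy as np

def make_badnets_trigger(size=4, channels=3, seed=0):
    """A size x size patch of random pixel values, fixed by the seed."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, 256, size=(size, size, channels), dtype=np.uint8)

def stamp(image, trigger):
    """Paste the trigger into the image's bottom-right corner."""
    out = image.copy()
    h, w = trigger.shape[:2]
    out[-h:, -w:, :] = trigger
    return out
```

Fixing the seed matters: the same trigger must be stamped at training and test time, otherwise the backdoor is never activated.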
“…door attack (Gudibande et al.; Gao et al. 2023; Li et al. 2023) has also received great attention since they pose potential threats to DNN-based applications. Backdoor attacks plant a backdoor into a victim model by injecting a trigger pattern into a small subset of training samples (Li et al. 2022b; Gu et al. 2019; Chen et al. 2017).…”
Section: Introduction (mentioning)
confidence: 99%