2023 | DOI: 10.1016/j.ins.2023.03.124
Clean-label poisoning attack with perturbation causing dominant features

Cited by 6 publications (1 citation statement) | References 25 publications
“…In [19], ATT_FLAV is introduced: a framework that strengthens the robustness of federated-learning-based autonomous driving models against poisoning attacks by using a bandit-based AttackRegion-UCB algorithm to dynamically choose the target attack label region in each training round. In [20], the authors introduce a new type of data poisoning attack, designed to protect personal data privacy, that can also be used as a powerful clean-label backdoor attack. The attack operates by adding imperceptible perturbations to clean data, causing DNNs to make incorrect classifications.…”
Section: Z. Wang et al. / An Overview of Artificial Intelligence Securi… (mentioning; confidence: 99%)
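The bandit-based region selection mentioned in the quote can be illustrated with a standard UCB1 policy. The sketch below is a generic UCB1 illustration, not the actual AttackRegion-UCB algorithm of [19]; the notion of a "region" as a bandit arm and the per-round reward signal (e.g., the measured attack effect) are assumptions for demonstration.

```python
# Generic UCB1 sketch for bandit-style attack-region selection (illustrative
# only; not the ATT_FLAV AttackRegion-UCB algorithm). Each arm is a candidate
# attack label region; the reward could be the attack's observed per-round effect.
import math

class RegionUCB:
    def __init__(self, n_regions):
        self.counts = [0] * n_regions    # times each region was selected
        self.values = [0.0] * n_regions  # running mean reward per region
        self.t = 0                       # total selection rounds so far

    def select(self):
        self.t += 1
        for r, c in enumerate(self.counts):
            if c == 0:                   # try every region at least once
                return r
        # exploitation term + exploration bonus (UCB1)
        ucb = [v + math.sqrt(2 * math.log(self.t) / c)
               for v, c in zip(self.values, self.counts)]
        return max(range(len(ucb)), key=ucb.__getitem__)

    def update(self, region, reward):
        self.counts[region] += 1
        c = self.counts[region]
        # incremental update of the mean reward for the chosen region
        self.values[region] += (reward - self.values[region]) / c
```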
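As a rough illustration of the clean-label perturbation attack summarized in the quote, the sketch below crafts a feature-collision-style poison: an epsilon-bounded perturbation pushes a correctly labeled base image's features toward a target instance, so the poison looks unchanged and keeps its true label. All names here (feature_extractor, base, target, eps, steps, lr) are hypothetical; this is a minimal sketch of the general technique, not the specific method of [20].

```python
# Hypothetical clean-label poisoning sketch: optimize a small perturbation so
# the poisoned image's features collide with a target's, while an L-infinity
# bound keeps the change imperceptible. Names and bounds are assumptions.
import torch

def craft_poison(feature_extractor, base, target, eps=8/255, steps=100, lr=0.01):
    """Return a poisoned copy of `base` (its original label is kept unchanged)."""
    feature_extractor.eval()
    delta = torch.zeros_like(base, requires_grad=True)
    with torch.no_grad():
        target_feat = feature_extractor(target)          # fixed feature anchor
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        poison_feat = feature_extractor(base + delta)
        loss = (poison_feat - target_feat).pow(2).sum()  # feature-space collision
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)                      # keep perturbation small
            delta.copy_((base + delta).clamp(0, 1) - base)  # stay a valid image
    return (base + delta).detach()
```

Because the poison retains its correct label, it passes human inspection during dataset curation, which is what makes clean-label attacks hard to filter out.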