2023
DOI: 10.1007/s11432-022-3548-2
Active poisoning: efficient backdoor attacks on transfer learning-based brain-computer interfaces

Cited by 8 publications (2 citation statements)
References 47 publications
“…When this poisoned data is used in TL training, the backdoor is successfully set. As a result, it reached an average attack success rate of around 90% across various model architectures and datasets [71]. Another backdoor attack model was proposed by Shuo Wang et al., with the goal of defeating defenses such as pruning, retraining, and input preprocessing.…”
Section: Attacks on Transfer Learning (mentioning)
confidence: 99%
“…These attacks exploit the model's sensitivity to minor perturbations in input data. Jiang et al. (2023) explored the potential risks associated with backdoor attacks on TL for EEG-based BCIs. This involves assessing the vulnerabilities that can be introduced when transferring models trained on poisoned data.…”
Section: Backdoor and Adversarial Attacks on Image Security Using Tra... (mentioning)
confidence: 99%
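As a rough illustration of the vulnerability assessment described in the statement above, one could measure the attack success rate of a fine-tuned model on trigger-stamped test trials. This is a hypothetical sketch; `attack_success_rate`, `stamp_trigger`, and the `.predict()` interface are assumptions, not anything from the cited works.

import numpy as np

def attack_success_rate(model, X_test, y_test, target_label, stamp_trigger):
    # model: any classifier exposing .predict() (an assumption here);
    # stamp_trigger: function applying the attacker's trigger to trials.
    mask = y_test != target_label            # count only non-target trials
    X_trig = stamp_trigger(X_test[mask].copy())
    preds = model.predict(X_trig)
    # Fraction of triggered trials steered to the attacker's target class.
    return float(np.mean(np.asarray(preds) == target_label))

Usage sketch (names hypothetical):
    asr = attack_success_rate(clf, X_te, y_te, target_label=1,
                              stamp_trigger=lambda X: X + trigger_pattern)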