2023
DOI: 10.1109/jbhi.2023.3303494
Generative Perturbation Network for Universal Adversarial Attacks on Brain-Computer Interfaces

Abstract: Deep neural networks (DNNs) have been successfully applied to classify EEG signals in brain-computer interface (BCI) systems. However, recent studies have found that well-designed input samples, known as adversarial examples, can easily fool well-performing deep neural network models with minor perturbations that are undetectable by a human. This paper proposes an efficient generative model named generative perturbation network (GPN), which can generate universal adversarial examples with the same architecture for non-targeted and targeted…
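As a rough illustration of the approach the abstract outlines, the sketch below shows a small PyTorch generator that maps a fixed latent vector to a single universal perturbation, scales it into an L∞ budget, and adds it unchanged to every EEG trial. The class name PerturbationGenerator, the layer sizes, and the eps value are illustrative assumptions, not the paper's actual GPN architecture.

```python
import torch
import torch.nn as nn

class PerturbationGenerator(nn.Module):
    """Hypothetical sketch of a universal-perturbation generator:
    one fixed latent code yields one perturbation shared by all trials."""

    def __init__(self, latent_dim=64, n_channels=22, n_samples=256):
        super().__init__()
        self.n_channels = n_channels
        self.n_samples = n_samples
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 512),
            nn.ReLU(),
            nn.Linear(512, n_channels * n_samples),
            nn.Tanh(),  # bound the raw output to [-1, 1]
        )

    def forward(self, z, eps=0.05):
        # eps is an assumed L-infinity budget, not a value from the paper
        delta = self.net(z).view(-1, self.n_channels, self.n_samples)
        return eps * delta


# Usage: the same perturbation is broadcast onto every trial in a
# (mock) batch of EEG data, which is what makes it "universal".
gen = PerturbationGenerator()
z = torch.randn(1, 64)
delta = gen(z)                      # shape (1, 22, 256)
eeg_batch = torch.randn(8, 22, 256)
adv_batch = eeg_batch + delta       # broadcast over the batch dimension
```

Training such a generator for a non-targeted attack would maximize the victim classifier's loss on adv_batch, while a targeted attack would minimize the loss toward a chosen class; those loops are omitted here.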

Cited by 5 publications (1 citation statement)
References 39 publications
“…Tsai et al [2] conducted a one-pixel attack on various medical image datasets, such as COVID-19, Chest, Derma, and Pneumonia, to generate adversarial images that can fool a trained model. They also performed a multi-pixel attack on the COVID-19 dataset to explore the impact of the number of perturbed pixels.…”
Section: Literature Survey
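For context on the cited technique: the one-pixel attack (Su et al.) searches for a single pixel whose modification changes a classifier's prediction, canonically via differential evolution. The snippet below is a minimal sketch that substitutes plain random search for brevity; the predict callable, trial count, and seed are illustrative assumptions.

```python
import numpy as np

def one_pixel_attack(x, predict, true_label, n_trials=500, seed=0):
    """Illustrative one-pixel attack via random search.

    Su et al.'s original attack optimizes the pixel with differential
    evolution; random search is used here only to keep the sketch short.
    `predict` is an assumed callable mapping an HxWxC image in [0, 1]
    to a vector of class probabilities.
    """
    rng = np.random.default_rng(seed)
    h, w, c = x.shape
    best, best_conf = x, predict(x)[true_label]
    for _ in range(n_trials):
        cand = x.copy()
        i, j = rng.integers(h), rng.integers(w)
        cand[i, j] = rng.random(c)       # replace exactly one pixel
        conf = predict(cand)[true_label]
        if conf < best_conf:             # drive down true-class confidence
            best, best_conf = cand, conf
    return best
```

A multi-pixel variant, as explored on the COVID-19 dataset in the cited work, would perturb k pixels per candidate instead of one.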