2019
DOI: 10.1109/tnsre.2019.2908955

On the Vulnerability of CNN Classifiers in EEG-Based BCIs

Abstract: Deep learning has been successfully used in numerous applications because of its outstanding performance and the ability to avoid manual feature engineering. One such application is electroencephalogram (EEG) based brain-computer interface (BCI), where multiple convolutional neural network (CNN) models have been proposed for EEG classification. However, it has been found that deep learning models can be easily fooled with adversarial examples, which are normal examples with small deliberate perturbations. This…

Cited by 93 publications (81 citation statements)
References: 30 publications
“…By alternately augmenting the training set and updating the substitute model, it can gradually approximate the target model. Recently, Zhang and Wu [22] extended this idea to EEG-based BCIs, but their approach was slightly different: they synthesized a new training set by using the loss computed from the inputs, instead of the labels from the target model, to calculate the Jacobian matrix.…”
Section: A. Black-box Attacks (mentioning)
confidence: 99%
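The substitute-training procedure described above can be sketched roughly as follows. This is a minimal PyTorch illustration, not the authors' released code: the substitute model, the step size `lmbda`, and the way labels for the new samples are obtained from the black-box target are all assumptions made for the example.

```python
# Minimal sketch (assumed, illustrative) of one round of Jacobian-based
# dataset augmentation for training a substitute model.
import torch
import torch.nn.functional as F

def augment_dataset(substitute, x, labels, lmbda=0.1):
    """Push each sample along the sign of the gradient of the substitute's
    loss w.r.t. the input, then append the perturbed samples to the set.
    `labels` would typically come from querying the black-box target model."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(substitute(x), labels)   # loss computed from the inputs
    loss.backward()
    x_new = (x + lmbda * x.grad.sign()).detach()    # augmented samples
    return torch.cat([x.detach(), x_new], dim=0)
```

In the classic Papernot-style attack the new samples are labeled by querying the target model, and the substitute is retrained on the enlarged set; the variant described above differs mainly in how the Jacobian is computed.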
“…The attack framework in this paper is the same as our previous work [22], where the attackers can add adversarial perturbations before the machine learning modules. Let x_i ∈ ℝ^{C×T} be the i-th raw EEG epoch (i = 1, …, n), where C is the number of EEG channels and T the number of time-domain samples.…”
Section: A. Attack Setting (mentioning)
confidence: 99%
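As a rough illustration of this attack setting, the sketch below adds an FGSM-style perturbation, bounded in the infinity norm, to a raw EEG epoch of shape (C, T) before it reaches the classifier. This is a hypothetical example, not necessarily the exact perturbation method of [22]; `model`, `epsilon`, and the tensor shapes are assumptions.

```python
# Minimal sketch (assumed): perturb one EEG epoch before the ML module.
import torch
import torch.nn.functional as F

def perturb_epoch(model, x, y_true, epsilon=0.01):
    """Return x + delta with ||delta||_inf <= epsilon, where delta is chosen
    to increase the classifier's loss on the true label y_true."""
    x = x.clone().detach().unsqueeze(0).requires_grad_(True)  # (1, C, T)
    loss = F.cross_entropy(model(x), y_true.view(1))
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.squeeze(0).detach()
```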
“…In speech recognition, adversarial examples can generate audio that sounds meaningless to a human but is understood as a meaningful voice command by a smartphone [2]. Our recent work [25] also showed that adversarial examples can dramatically degrade the classification accuracy of EEG-based BCIs.…”
Section: Introduction (mentioning)
confidence: 99%