2019
DOI: 10.1007/978-3-030-36708-4_39

White-Box Target Attack for EEG-Based BCI Regression Problems

Abstract: Machine learning has achieved great success in many applications, including electroencephalogram (EEG) based brain-computer interfaces (BCIs). Unfortunately, many machine learning models are vulnerable to adversarial examples, which are crafted by adding deliberately designed perturbations to the original inputs. Many adversarial attack approaches for classification problems have been proposed, but few have considered target adversarial attacks for regression problems. This paper proposes two such approaches. …

Cited by 44 publications (28 citation statements); references 19 publications.

“…As for the loss function L, we use the loss function provided by Carlini and Wagner [8] for our classification tasks. For our regression models, we replace the standard CW loss function with a custom loss designed for regression tasks by Meng et al. [41]. Projected Gradient Descent (PGD) [39] finds the adversarial example x + δ by solving the following maximization problem:…”
Section: Adversarial Attack Algorithms
confidence: 99%
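The maximization problem itself is truncated in the snippet above. As a hedged reconstruction of the standard PGD formulation (common notation, not necessarily that of the citing paper), the attacker seeks a norm-bounded perturbation that maximizes the loss and approximates it with projected sign-gradient steps:

% Standard PGD formulation (assumed symbols: model f, loss L, clean input x,
% label or target y, perturbation budget \epsilon, step size \alpha,
% projection \Pi onto the \epsilon-ball)
\max_{\|\delta\|_\infty \le \epsilon} L\bigl(f(x + \delta),\, y\bigr),
\qquad
\delta_{t+1} = \Pi_{\|\delta\|_\infty \le \epsilon}\!\left(\delta_t + \alpha \operatorname{sign}\!\bigl(\nabla_{\delta} L(f(x + \delta_t), y)\bigr)\right)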
“…Adversarial attacks on EEG-based BCIs have been explored in our previous studies 27,28,33,40. All of them were evasion attacks.…”
Section: Discussion
confidence: 99%
“…They successfully attacked three convolutional neural network (CNN) classifiers in three different applications [P300 evoked potential detection, feedback error-related negativity (ERN) detection, and motor imagery (MI) classification]. Meng et al. 33 further confirmed the existence of adversarial examples in two EEG-based BCI regression problems (driver fatigue estimation, and reaction time estimation in the psychomotor vigilance task), which successfully changed the regression model's prediction by a user-specified amount. More recently, Zhang et al. 28 also showed that P300 and steady-state visual evoked potential (SSVEP) based BCI spellers can be easily attacked: a tiny perturbation to the EEG trial can mislead the speller to output any character the attacker wants.…”
confidence: 99%
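The user-specified shift described above can be illustrated with a small white-box sketch: take gradient steps on the input so that the regression output moves toward a chosen target. This is a hedged illustration of the general idea, not the exact algorithm of Meng et al.; the model, function name, and hyper-parameters below are hypothetical.

import torch

def targeted_regression_attack(model, x, shift, eps=0.1, alpha=0.01, steps=40):
    """Perturb EEG trial x so that model(x + p) moves toward model(x) + shift.

    Hypothetical sketch: `model` maps a trial to a scalar prediction;
    `eps` bounds the L-inf norm of the perturbation p.
    """
    x = x.detach()
    target = model(x).detach() + shift                 # user-specified change in output
    p = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = ((model(x + p) - target) ** 2).mean()   # distance to the target prediction
        loss.backward()
        with torch.no_grad():
            p -= alpha * p.grad.sign()                 # step toward the target
            p.clamp_(-eps, eps)                        # keep the perturbation tiny
        p.grad.zero_()
    return (x + p).detach()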
“…Another method, iFGSM [12], strengthens the adversarial attack by iteratively applying FGSM. One recent study [10] has shown that these two methods can also attack deep learning models for EEG analytics. However, that study assumes attacking all channels and all time steps simultaneously, and cannot work effectively under sparsity constraints, i.e., when only a small portion of channels and time steps may be perturbed.…”
Section: Background and Related Work
confidence: 99%
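For context, iFGSM repeats the FGSM step several times with a small step size and clips the result back into the ε-ball around the original input. A minimal sketch, assuming a PyTorch classifier and cross-entropy loss (the function name and parameters are illustrative, not taken from the cited papers):

import torch
import torch.nn.functional as F

def ifgsm_attack(model, x, y, eps=0.03, alpha=0.005, steps=10):
    """Iterative FGSM: repeat the sign-gradient step, then clip into the eps-ball."""
    x_orig = x.detach()
    x_adv = x_orig.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)        # untargeted: increase the loss
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()        # one FGSM step
            x_adv = torch.clamp(x_adv, x_orig - eps, x_orig + eps)  # project into eps-ball
    return x_adv.detach()

Note that this perturbs every channel and time step; the sparsity-constrained setting discussed above would additionally mask the gradient so that only a chosen subset of channels and time steps is modified.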
“…Such wide applications of EEG analytics motivate the investigation of their robustness and reliability. One initial study [10] shows that adversarial attacks from the computer vision domain, such as FGSM [11] and iFGSM [12], can also dramatically change the outputs of EEG models by introducing perturbations to the EEG data. This approach assumes a strong…”
Section: Introduction
confidence: 99%
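For reference, the single-step FGSM perturbation mentioned above has a standard closed form, stated here in common notation as an assumption, since the snippet does not reproduce it:

% FGSM: one sign-gradient step of size \epsilon on the loss L of model f at input x with label y
x_{\mathrm{adv}} = x + \epsilon \operatorname{sign}\!\bigl(\nabla_{x} L(f(x), y)\bigr)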