2022
DOI: 10.1016/j.jneumeth.2021.109425
Feature Selection Using Extreme Gradient Boosting Bayesian Optimization to upgrade the Classification Performance of Motor Imagery signals for BCI

Cited by 25 publications (8 citation statements)
References 62 publications
“…Furthermore, when targeting mobile EEG devices, removing movement artifacts can lead to better EEG signal quality and improved performance [ 46 ]. Another possible improvement could be to tailor the proposed decoder to individual subjects by using some form of automatic hyperparameter tuning—for example, applying Bayesian optimization for feature selection [ 47 ] and hyperparameter optimization for the network structures [ 48 ]. Moreover, other BCI paradigms might lead to more user-friendly designs; for example, one could use imagined speech to switch between walking, rotation, and standing, while using IM for rotating only.…”
Section: Discussion
confidence: 99%
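The feature-selection route suggested above ([47]) can be illustrated with a small sketch: Bayesian optimization (via scikit-optimize, assumed here) searches jointly over XGBoost hyperparameters and the number of top-ranked features to keep. The synthetic data, search ranges, and importance-based ranking are illustrative assumptions, not the cited paper's exact pipeline.

```python
# Minimal sketch (not the cited authors' exact pipeline): Bayesian optimization
# over XGBoost hyperparameters plus the number of top-ranked features to keep.
# Data, feature ranges, and search space below are illustrative assumptions.
import numpy as np
from skopt import gp_minimize
from skopt.space import Integer, Real
from skopt.utils import use_named_args
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))        # e.g. 64 band-power features per trial
y = rng.integers(0, 2, size=200)      # binary motor-imagery labels

space = [
    Integer(5, 64, name="n_features"),                        # how many features to keep
    Integer(2, 8, name="max_depth"),
    Real(1e-3, 0.3, prior="log-uniform", name="learning_rate"),
]

# Rank features once with a baseline model; BO then tunes how many to keep.
ranking = np.argsort(XGBClassifier(n_estimators=100).fit(X, y).feature_importances_)[::-1]

@use_named_args(space)
def objective(n_features, max_depth, learning_rate):
    cols = ranking[:n_features]
    clf = XGBClassifier(n_estimators=200, max_depth=max_depth,
                        learning_rate=learning_rate, eval_metric="logloss")
    # Negative accuracy because gp_minimize minimizes its objective.
    return -cross_val_score(clf, X[:, cols], y, cv=5, scoring="accuracy").mean()

result = gp_minimize(objective, space, n_calls=30, random_state=0)
print("best CV accuracy:", -result.fun, "with params:", result.x)
```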
“…This operation reduces the memory usage. X_{j+1}^t = MaxPool(ELU(Conv1d([X_j^t]_AB))) (10), where [·]_AB includes the key operations of multi-head ProbSparse self-attention and the attention block, Conv1d(·) is the 1D convolution on the time series, ELU(·) is the activation function, and MaxPool(·) is the maximum-pooling layer with a step length of 2. With the increase in the number of encoder layers after the self-attention distilling operation, the cumulative time-series length at each layer was half of that in the upper layer; this restricted the data volume of the model parameters and reduced the demand for video memory of the GPU.…”
Section: Figure 5 EEG Generated In Different Parts Of the Brain Is Su...
confidence: 99%
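The distilling step quoted above can be sketched as a small PyTorch module (an assumed framework choice): a 1D convolution over the time axis, an ELU activation, and max-pooling with stride 2, so each encoder layer halves the sequence length. Tensor shapes and layer sizes are illustrative, not taken from the cited work.

```python
# Hedged sketch of the self-attention "distilling" step described above:
# 1D convolution over the time axis, ELU activation, and max-pooling with
# stride 2, halving the sequence length between encoder layers.
import torch
import torch.nn as nn

class DistillingLayer(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        self.conv = nn.Conv1d(d_model, d_model, kernel_size=3, padding=1)
        self.act = nn.ELU()
        self.pool = nn.MaxPool1d(kernel_size=3, stride=2, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model), i.e. the output of the attention block
        x = x.transpose(1, 2)            # (batch, d_model, seq_len) for Conv1d
        x = self.pool(self.act(self.conv(x)))
        return x.transpose(1, 2)         # (batch, seq_len // 2, d_model)

x = torch.randn(8, 128, 64)              # 8 trials, 128 time steps, 64 features
print(DistillingLayer(64)(x).shape)       # torch.Size([8, 64, 64])
```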
“…Another method is motor imagery (MI), in which users imagine limb motions to input data into the BCI. Thenmozhi et al. improved the classification performance of MI signals for BCI [10]. Attallah et al. used MI-BCI to assist people with limb motor disabilities by enabling them to control assistive devices through their brain signals [11].…”
Section: Introduction
confidence: 99%
“…Filter methods focused on small data sets in classification to improve performance [34]. Some studies used the fast Fourier transform (FFT) [35]. In comparison, other studies used wavelet transforms [31].…”
Section: Introduction
confidence: 99%
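The FFT-based feature extraction mentioned above can be illustrated with a short sketch that computes EEG band power via Welch's method (an FFT-based spectral estimate). The sampling rate, band edges, and channel count are assumed values, not taken from the cited studies.

```python
# Illustrative sketch of FFT/Welch-based band-power features for EEG trials.
# Sampling rate, frequency bands, and trial shape are assumptions.
import numpy as np
from scipy.signal import welch

FS = 250                                    # sampling rate in Hz (assumed)
BANDS = {"mu": (8, 12), "beta": (13, 30)}   # motor-imagery-relevant bands

def band_powers(trial: np.ndarray) -> np.ndarray:
    """trial: (n_channels, n_samples) -> (n_channels * n_bands,) feature vector."""
    freqs, psd = welch(trial, fs=FS, nperseg=FS, axis=-1)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs <= hi)
        feats.append(psd[:, mask].mean(axis=-1))    # average power in the band
    return np.concatenate(feats)

trial = np.random.randn(22, 1000)                   # 22 channels, 4 s at 250 Hz
print(band_powers(trial).shape)                     # (44,)
```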