2022
DOI: 10.1016/j.neuroscience.2021.11.045

Functional Complex Networks Based on Operational Architectonics: Application on EEG-based Brain–computer Interface for Imagined Speech

Cited by 6 publications (4 citation statements) · References 119 publications
“…Several classical machine learning techniques have been used to approach imagined speech decoding for EEG-based BCI systems. Some of the most common algorithms include Linear Discriminant Analysis (LDA) (Chi et al, 2011; Song and Sepulveda, 2014; Lee et al, 2021b), Support Vector Machines (SVM) (DaSalla et al, 2009; García et al, 2012; Kim et al, 2013; Riaz et al, 2014; Sarmiento et al, 2014; Zhao and Rudzicz, 2015; Arjestan et al, 2016; González-Castañeda et al, 2017; Hashim et al, 2017; Cooney et al, 2018; Moctezuma and Molinas, 2018; Agarwal and Kumar, 2021), Random Forests (RF) (González-Castañeda et al, 2017; Moctezuma and Molinas, 2018; Moctezuma et al, 2019), k-Nearest-Neighbors (kNN) (Riaz et al, 2014; Bakhshali et al, 2020; Agarwal and Kumar, 2021; Rao, 2021; Dash et al, 2022), Naive Bayes (Dash et al, 2020a; Agarwal and Kumar, 2021; Iliopoulos and Papasotiriou, 2021; Lee et al, 2021b), and Relevance Vector Machines (RVM) (Liang et al, 2006; Matsumoto and Hori, 2014). Furthermore, deep learning approaches have recently taken on a major role in imagined speech recognition.…”
Section: Classification Techniques in Literature
confidence: 99%
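The statement above names the classical classifiers (LDA, SVM, RF, kNN, Naive Bayes, RVM) most often applied to imagined-speech EEG decoding. As an illustrative sketch only, the Python snippet below shows one plausible scikit-learn pipeline of that kind; the synthetic data, the band-power feature extraction, and all parameter choices are assumptions for illustration and do not reproduce the method of the cited paper or of any specific reference.

```python
# Hypothetical sketch: classifying imagined-speech EEG trials with an SVM,
# in the spirit of the classical pipelines cited above. Synthetic data and
# band-power features are placeholder assumptions, not the cited method.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic EEG: 100 trials x 14 channels x 256 samples (1 s at 256 Hz),
# two imagined-speech classes (e.g. two imagined words).
n_trials, n_channels, n_samples, fs = 100, 14, 256, 256
X_raw = rng.standard_normal((n_trials, n_channels, n_samples))
y = rng.integers(0, 2, size=n_trials)

def band_power(trials, fs, bands=((4, 8), (8, 13), (13, 30))):
    """Mean spectral power per channel in a few canonical EEG bands."""
    freqs = np.fft.rfftfreq(trials.shape[-1], d=1.0 / fs)
    psd = np.abs(np.fft.rfft(trials, axis=-1)) ** 2
    feats = [psd[..., (freqs >= lo) & (freqs < hi)].mean(axis=-1)
             for lo, hi in bands]
    return np.concatenate(feats, axis=-1)  # (n_trials, n_channels * n_bands)

X = band_power(X_raw, fs)

# Standardize features, then fit an RBF-kernel SVM with 5-fold cross-validation.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"5-fold accuracy: {scores.mean():.2f} ± {scores.std():.2f}")
```

On real recordings the feature step would typically follow artifact rejection and band-pass filtering, and the classifier and its hyperparameters would be selected per subject; the choices above are placeholders.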
“…was achieved using different computational models (Butson et al [2007], Butenko et al [2020], Rubin and Terman [2004], Popovych and Tass [2019], Spiliotis et al [2022], Fleming et al [2020]). However, due to the strongly heterogeneous connection topology and the intrinsic complexity (stochastic, nonlinear, large-scale neuronal dynamics across multiple scales) (Spiliotis and Siettos [2011], Siettos and Starke [2016], Deco et al [2008, 2013], Bassett and Bullmore [2017], Iliopoulos and Papasotiriou [2022]), the DBS outcome is far from trivial to predict.…”
Section: Introduction
confidence: 99%