2021
DOI: 10.3390/s21206744

Silent EEG-Speech Recognition Using Convolutional and Recurrent Neural Network with 85% Accuracy of 9 Words Classification

Abstract: In this work, we focus on silent speech recognition in electroencephalography (EEG) data of healthy individuals to advance brain–computer interface (BCI) development to include people with neurodegeneration and movement and communication difficulties in society. Our dataset was recorded from 270 healthy subjects during silent speech of eight different Russian words (commands): 'forward', 'backward', 'up', 'down', 'help', 'take', 'stop', and 'release', and one pseudoword. We began by demonstrating that silent wo…

Cited by 37 publications (23 citation statements)
References 24 publications
“…As shown in tables 4 and 5, the proposed cross-task transfer learning model achieves a competitive accuracy of 82.35% for the target KaraOne model and 89.01% for the target FEIS model for multiclass classification, in comparison to prior studies with other transfer learning models such as Cooney et al [28] (cross-subject learning, accuracy of 35.68%), Vorontsova et al [29] (cross-domain learning, accuracy of 84.5%), Tamm et al [30] (cross-subject learning, accuracy of 24.77%), and Panachakel et al [31] (cross-domain learning, accuracy of 79.7%–95.5%). The results show that transfer learning procedures are significant for the generalizability of decoding imagined speech EEG signals, despite the difficulties of comparison across state-of-the-art transfer learning research, where the investigations were conducted with distinct datasets.…”
Section: Comparison With the State-of-the-Art
confidence: 73%
“…Cooney et al [28] experimented with inter-subject information transfer, and the results have shown that transfer learning methods can improve model generalizability and the effectiveness of imagined speech decoding on target subjects. Studies by Vorontsova et al [29] demonstrate the application of a transfer learning classifier consisting of ResNet101 and gated recurrent units (GRU) on nine words of silent speech signals, and show that a model trained on smaller amounts of signals becomes transferable across a broad population. Tamm et al [30] explored imagined speech of five vowels and six words using an inter-subject transfer learning approach, yielding noticeably lower performance.…”
Section: Introduction
confidence: 99%
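The inter-subject experiments cited above all rest on the same data split: pre-train on every subject except one, then fine-tune and evaluate on the held-out subject. A minimal sketch of that split, in pure Python (all names are illustrative, not taken from the cited papers' code):

```python
# Hedged sketch of a cross-subject transfer-learning split: EEG trials
# from all but one subject form the pre-training (source) set; the
# held-out subject's trials form the fine-tuning/evaluation (target) set.
def cross_subject_split(trials, target_subject):
    source = [t for t in trials if t["subject"] != target_subject]
    target = [t for t in trials if t["subject"] == target_subject]
    return source, target

# Toy example: three subjects with two trials each.
trials = [{"subject": s, "label": i % 2} for i, s in enumerate((1, 1, 2, 2, 3, 3))]
source, target = cross_subject_split(trials, target_subject=3)
print(len(source), len(target))  # 4 2
```

The design choice this split encodes is what separates cross-subject from cross-domain results in the comparison above: the target subject's data never leaks into pre-training, which is why cross-subject accuracies (35.68%, 24.77%) tend to sit well below cross-domain ones.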
“…In recent years, there have been important technological and methodological advancements in perceived and imagined speech decoding (Martin et al, 2018;Panachakel & Ramakrishnan, 2021). Recent works focus on the classification of vowels (M. S. Mahmud et al, 2020; N. T. Duc & B. Lee, 2020), syllables (Archila-Meléndez et al, 2018;Brandmeyer et al, 2013;Correia et al, 2015), words (Ossmy et al, 2015;Proix et al, 2022;Vorontsova et al, 2021) and complete sentences (Chakrabarti et al, 2015;Zhang et al, 2012), distinguishing stimuli mainly at the semantic level. The most advanced online decoding techniques rely heavily on the articulatory representation of syllables and words in the motor and supplementary motor cortices (Anumanchipalli et al, 2019).…”
Section: Discussion
confidence: 99%
“…Recent studies reported more encouraging results on multi-class classification systems for imagined speech recognition. Using a large database of eight different Russian words (plus one pseudoword) acquired from 270 subjects, the researchers [13] obtained a maximum accuracy of 85% for nine-class classification and 88% for binary classification. The results were obtained using the frequency-domain representation of the signals, classified with a ResNet18 + 2GRU (gated recurrent unit) network.…”
Section: State of the Art
confidence: 99%
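The ResNet18 + 2GRU pipeline mentioned in [13] can be followed at the level of tensor shapes alone: a ResNet18-style convolutional front end downsamples the input window by a factor of 32 in time, and the resulting feature sequence feeds two stacked GRUs whose final state drives a 9-way classifier. A shape-tracing sketch under those assumptions (layer sizes are standard ResNet18 defaults, not the exact configuration of [13]):

```python
# Hedged sketch: trace the temporal length of a frequency-domain EEG
# window through a standard ResNet18-style front end. No deep-learning
# framework is needed to follow the shapes.
def conv_out(length, kernel, stride, padding):
    """Standard convolution output-length formula (one dimension)."""
    return (length + 2 * padding - kernel) // stride + 1

def trace_shapes(time_steps):
    t = conv_out(time_steps, kernel=7, stride=2, padding=3)   # stem conv
    t = conv_out(t, kernel=3, stride=2, padding=1)            # max pool
    for stride in (1, 2, 2, 2):                               # 4 residual stages
        t = conv_out(t, kernel=3, stride=stride, padding=1)
    # The CNN output (t steps of 512-dim features) feeds two stacked
    # GRUs; the final hidden state goes to a 9-way softmax head.
    return {"gru_seq_len": t, "gru_input_dim": 512, "classes": 9}

print(trace_shapes(224))  # {'gru_seq_len': 7, 'gru_input_dim': 512, 'classes': 9}
```

The recurrent stage is what lets a fixed-size convolutional feature map be summarized over a variable-length silent-speech window before the final 9-class decision.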