Time Domain Analysis of EEG to Classify Imagined Speech
2015 | DOI: 10.1007/978-81-322-2523-2_77

Cited by 9 publications (8 citation statements)
References 6 publications
“…Fourth-order Daubechies were used to achieve the results presented in [56] and were therefore used here. A feature vector was constructed using the RWE of decomposition levels D2 (16-32 Hz), D3 (8-16 Hz), D4 (4-8 Hz), D5 (2-4 Hz), and A5 (<2 Hz), for each channel. This resulted in a 30-element feature vector for each trial.…”
Section: Benchmark Machine Learning Classifiers (mentioning)
confidence: 99%
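To make the quoted pipeline concrete, here is a minimal sketch of relative wavelet energy (RWE) extraction with a five-level fourth-order Daubechies (db4) decomposition using PyWavelets. It is an illustration under stated assumptions, not the cited authors' implementation: the six-channel trial shape, the dropping of the D1 band, and normalisation by the total energy of all sub-bands are choices made here so that five bands per channel yield the 30-element vector described above.

import numpy as np
import pywt

def rwe_features(trial, wavelet="db4", level=5):
    # trial: array of shape (n_channels, n_samples), one imagined-speech trial.
    # Returns a vector of length n_channels * 5 built from the relative
    # energies of A5, D5, D4, D3 and D2 (D1 is discarded; an assumption).
    features = []
    for channel in trial:
        coeffs = pywt.wavedec(channel, wavelet, level=level)  # [A5, D5, D4, D3, D2, D1]
        energies = np.array([np.sum(c ** 2) for c in coeffs])
        rwe = energies / energies.sum()   # relative wavelet energy per sub-band
        features.extend(rwe[:-1])         # keep A5..D2, drop D1
    return np.array(features)

# Six hypothetical EEG channels, 512 samples each -> 30-element vector
trial = np.random.randn(6, 512)
print(rwe_features(trial).shape)  # (30,)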
“…Here, six frequency bands are used to construct the filter bank. These are delta (2-4 Hz), theta (4-8 Hz), mu (8-12 Hz), lower beta (12-18 Hz), upper beta (18-28 Hz), and gamma (28-40 Hz). Three HPs were selected for optimization, i.e., (1).…”
Section: Benchmark Machine Learning Classifiers (mentioning)
confidence: 99%
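The band edges quoted above are enough to sketch the filter-bank construction itself. The snippet below builds six Butterworth band-pass filters with SciPy; the 250 Hz sampling rate and fourth filter order are placeholder assumptions (the three optimized hyperparameters mentioned in the citation are not specified here), not values drawn from the cited work.

import numpy as np
from scipy.signal import butter, sosfiltfilt

# Six-band filter bank matching the ranges quoted above
BANDS = {
    "delta": (2, 4), "theta": (4, 8), "mu": (8, 12),
    "lower_beta": (12, 18), "upper_beta": (18, 28), "gamma": (28, 40),
}

def filter_bank(eeg, fs=250.0, order=4):
    # eeg: array of shape (n_channels, n_samples).
    # Returns an array of shape (n_bands, n_channels, n_samples).
    out = []
    for low, high in BANDS.values():
        sos = butter(order, [low, high], btype="bandpass", fs=fs, output="sos")
        out.append(sosfiltfilt(sos, eeg, axis=-1))
    return np.stack(out)

# Example: 6 channels x 1000 samples at an assumed 250 Hz sampling rate
trial = np.random.randn(6, 1000)
print(filter_bank(trial).shape)  # (6, 6, 1000)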
“…This process can be carried out in the time domain, frequency domain, or spatial domain. In the time domain, feature extraction is often done through statistical analysis, obtaining features such as standard deviation (SD), root mean square (RMS), mean, variance, sum, maximum, minimum, Hjorth parameters, sample entropy, and autoregressive (AR) coefficients, among others (Riaz et al., 2014; Iqbal et al., 2016; AlSaleh et al., 2018; Cooney et al., 2018; Paul et al., 2018; Lee et al., 2019). On the other hand, the most common methods used to extract features from the frequency domain include Mel Frequency Cepstral Coefficients (MFCC), the Short-Time Fourier Transform (STFT), the Fast Fourier Transform (FFT), the Wavelet Transform (WT), the Discrete Wavelet Transform (DWT), and the Continuous Wavelet Transform (CWT) (Riaz et al., 2014; Salinas, 2017; Cooney et al., 2018; García-Salinas et al., 2018; Panachakel et al., 2019; Pan et al., 2021).…”
Section: Feature Extraction Techniques In Literature (mentioning)
confidence: 99%
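As a concrete illustration of the time-domain route described in that survey passage, the short sketch below computes several of the listed statistical features (mean, SD, variance, RMS, sum, extrema, and the Hjorth mobility and complexity) for one EEG channel. Sample entropy and AR coefficients are left out for brevity; none of this is taken from a specific paper's code.

import numpy as np

def hjorth(x):
    # Hjorth activity, mobility and complexity of a 1-D signal
    dx = np.diff(x)
    ddx = np.diff(dx)
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / activity)
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

def time_domain_features(x):
    # A subset of the statistical features listed in the passage above
    activity, mobility, complexity = hjorth(x)
    return np.array([
        np.mean(x), np.std(x), activity,   # mean, SD, variance (Hjorth activity)
        np.sqrt(np.mean(x ** 2)),          # root mean square (RMS)
        np.sum(x), np.max(x), np.min(x),
        mobility, complexity,              # remaining Hjorth parameters
    ])

channel = np.random.randn(512)
print(time_domain_features(channel))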
“…Speech-BCI, or neural silent speech interface (NSSI), a novel technological paradigm that aims to convert neural signals to speech (text, acoustics, or articulatory parameters) and then drive a text- or articulatory-to-speech synthesizer [10], has recently been demonstrated to be possible from either invasive or non-invasive neural signals [11][12][13]. Although a few studies have investigated speech decoding with EEG-based BCIs, such as "yes" or "no" classification [14], binary phoneme classification [15], or syllable classification [16,17], they suffer from intermediate performance, possibly due to the low spatial resolution and low signal-to-noise ratio of EEG signals. Invasive electrocorticography (ECoG) and non-invasive magnetoencephalography (MEG) have recently shown higher potential for speech decoding [11][12][13][18][19][20].…”
Section: Introduction (mentioning)
confidence: 99%