2020 28th European Signal Processing Conference (EUSIPCO), 2021
DOI: 10.23919/eusipco47968.2020.9287498
Generating EEG features from Acoustic features

Abstract: In this paper we demonstrate predicting electroencephalography (EEG) features from acoustic features using a recurrent neural network (RNN) based regression model and a generative adversarial network (GAN). We predict various types of EEG features from acoustic features. We compare our results with the previously studied problem of speech synthesis using EEG, and our results demonstrate that EEG features can be generated from acoustic features with lower root mean square error (RMSE) and normalized RMSE values compare…
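The abstract describes an RNN-based regression model that maps acoustic features to EEG features. As an illustration of that general setup only, the following is a minimal PyTorch sketch of a GRU regressor trained with an MSE objective; the feature dimensions, layer sizes, and optimizer settings are assumptions made for the example, not the configuration reported in the paper.

```python
# Minimal sketch: RNN (GRU) regression from acoustic feature sequences to
# EEG feature sequences. All dimensions below are illustrative assumptions.
import torch
import torch.nn as nn

class AcousticToEEGRegressor(nn.Module):
    def __init__(self, acoustic_dim=13, eeg_dim=30, hidden_dim=128, num_layers=2):
        super().__init__()
        # GRU encodes the acoustic feature sequence frame by frame.
        self.rnn = nn.GRU(acoustic_dim, hidden_dim, num_layers=num_layers,
                          batch_first=True)
        # Linear readout predicts one EEG feature vector per time step.
        self.readout = nn.Linear(hidden_dim, eeg_dim)

    def forward(self, acoustic_seq):
        # acoustic_seq: (batch, time, acoustic_dim)
        hidden_seq, _ = self.rnn(acoustic_seq)
        return self.readout(hidden_seq)        # (batch, time, eeg_dim)

model = AcousticToEEGRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss()

# Dummy batch standing in for time-aligned acoustic/EEG feature sequences.
acoustic = torch.randn(8, 100, 13)
eeg_target = torch.randn(8, 100, 30)

pred = model(acoustic)
loss = mse(pred, eeg_target)
loss.backward()
optimizer.step()
print("RMSE:", loss.sqrt().item())             # RMSE-style metric, as in the paper's evaluation
```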

Cited by 9 publications (21 citation statements); References 14 publications.
“…Generating EEG features from Acoustic features (Krishna et al 2020, EUSIPCO): In Krishna et al (2021a), the authors developed an RNN-based forward regression model. It can be seen as the inverse problem of the EEG-based synthesis from Krishna et al (2020).…”
Section: Recurrent Neural Network (mentioning)
confidence: 99%
“…This may explain the large number of articles that have been published. Various features for both language and dialect recognition are available in the literature, such as MFCC (see table (3)), singular value decomposition, and linear predictive coding (LPC). Koolagudi et al extracted several MFCC sets of different dimensions (6, 8, 13, 19, 21, 29 and 35) from the speech signal for identifying languages such as Bangla, Assamese, Gujarati, Hindi, Nepali, Kashmiri, Malayalam, Marathi, Rajasthani, Odia, Punjabi, Tamil, Telugu, Kannada, and Urdu.…”
Section: Language and Dialect Recognition (mentioning)
confidence: 99%
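The citation above mentions extracting MFCC sets of several different dimensions from speech. As a hedged illustration of how such variable-size MFCC features can be computed, the sketch below uses librosa's librosa.feature.mfcc on a synthetic tone that stands in for a real speech recording; it is not tied to the cited experiments.

```python
# Minimal sketch: MFCC sets of different dimensions from a speech-like signal.
import numpy as np
import librosa

sr = 16000
# Synthetic 1-second tone used only as a placeholder for real speech audio.
y = 0.1 * np.sin(2 * np.pi * 220 * np.arange(sr) / sr).astype(np.float32)

for n_mfcc in (6, 8, 13, 19, 21, 29, 35):
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    print(n_mfcc, mfcc.shape)   # (n_mfcc, number_of_frames)
```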
“…Feature extraction is the process of extracting and exploiting hidden information in the raw data signal [3]. Applying feature extraction makes the data more manageable because it removes ineffective features without losing any important or relevant information.…”
Section: Introduction (mentioning)
confidence: 99%
“…Further, Roy et al analyzed generated samples both qualitatively and quantitatively. Similarly, Krishna et al (2020) constructed a gated recurrent unit (GRU) (Chung et al., 2014)-based generator and a GRU-based discriminator with the GAN loss function, i.e., Equation (1). Thus, Krishna et al augmented EEG data for speech recognition and achieved a performance improvement.…”
Section: Advances In Data Augmentation (mentioning)
confidence: 99%
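The statement above describes a GRU-based generator and a GRU-based discriminator trained with a GAN loss; the cited Equation (1) is not reproduced here, so the sketch below assumes the standard binary cross-entropy GAN objective. Tensor shapes and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: GRU-based generator and GRU-based discriminator trained
# with the standard GAN (binary cross-entropy) objective on EEG-like features.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, noise_dim=32, eeg_dim=30, hidden_dim=128):
        super().__init__()
        self.rnn = nn.GRU(noise_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, eeg_dim)

    def forward(self, z_seq):                  # (batch, time, noise_dim)
        h, _ = self.rnn(z_seq)
        return self.out(h)                     # synthetic EEG feature sequence

class Discriminator(nn.Module):
    def __init__(self, eeg_dim=30, hidden_dim=128):
        super().__init__()
        self.rnn = nn.GRU(eeg_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, 1)

    def forward(self, x_seq):                  # (batch, time, eeg_dim)
        _, h_last = self.rnn(x_seq)
        return self.out(h_last[-1])            # one real/fake logit per sequence

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_eeg = torch.randn(8, 100, 30)             # stand-in for real EEG features
z = torch.randn(8, 100, 32)                    # noise sequence for the generator

# Discriminator step: push real toward 1, generated toward 0.
fake_eeg = G(z).detach()
d_loss = bce(D(real_eeg), torch.ones(8, 1)) + bce(D(fake_eeg), torch.zeros(8, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: try to make generated sequences classified as real.
g_loss = bce(D(G(z)), torch.ones(8, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
print("D loss:", d_loss.item(), "G loss:", g_loss.item())
```

Sequences produced by the trained generator could then be mixed into the real EEG feature set, which is the augmentation role the citation attributes to this architecture.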