2016
DOI: 10.1371/journal.pcbi.1005119

Real-Time Control of an Articulatory-Based Speech Synthesizer for Brain Computer Interfaces

Abstract: Restoring natural speech in paralyzed and aphasic people could be achieved using a Brain-Computer Interface (BCI) controlling a speech synthesizer in real-time. To reach this goal, a prerequisite is to develop a speech synthesizer producing intelligible speech in real-time with a reasonable number of control parameters. We present here an articulatory-based speech synthesizer that can be controlled in real-time for future BCI applications. This synthesizer converts movements of the main speech articulators (to…
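The abstract describes converting movements of the speech articulators into audible speech in real time. The following is a minimal sketch, not the authors' implementation, of what such a real-time articulatory-to-acoustic loop could look like; the frame rate, feature dimensions, and the model and vocoder interfaces are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the published system): map incoming
# articulatory parameter frames to acoustic features with a trained model
# and hand the resulting audio to an output stream, frame by frame.

import numpy as np

FRAME_RATE_HZ = 100       # assumed control rate: one articulatory frame every 10 ms
N_ARTIC_PARAMS = 14       # e.g. x/y coordinates of 7 articulator sensors (assumption)
N_ACOUSTIC_PARAMS = 25    # e.g. spectral/vocoder coefficients per frame (assumption)


def articulatory_to_acoustic(artic_frame, model):
    """Map one articulatory frame to one acoustic feature frame.

    `model` is assumed to expose a `predict(batch)` method returning a batch
    of acoustic feature vectors (hypothetical interface).
    """
    return model.predict(artic_frame[np.newaxis, :])[0]


def realtime_synthesis_loop(get_artic_frame, model, vocoder, audio_out):
    """Run until the articulatory source is exhausted.

    get_artic_frame: callable returning the next articulatory frame, or None when done
    vocoder:         callable turning one acoustic feature frame into audio samples
    audio_out:       callable consuming audio samples (e.g. a sound-card buffer)
    """
    while True:
        artic = get_artic_frame()
        if artic is None:
            break
        acoustic = articulatory_to_acoustic(artic, model)
        audio_out(vocoder(acoustic))
```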

Cited by 72 publications (63 citation statements)
References: 52 publications
“…All participants performed an overt speech production task. Participants P2, P3 and P5 were asked to read aloud short French sentences, which were part of a large articulatory-acoustic corpus acquired previously (Bocquelet et al., 2016b) and made freely available (https://doi.org/10.5281/zenodo.154083). Participant P5 also took part in a protocol involving speech perception, where she was exposed to the sound of computer-generated vowels delivered by a loudspeaker positioned about 50 cm on her left.…”
Section: Task and Stimuli (mentioning)
confidence: 99%
“…For example, during speech planning, phonemes are coarticulated: the articulatory gestures that comprise a given phoneme are modified based on neighboring phonemes in the uttered word or phrase (Whalen, 1990). While the dynamic properties of these gestures, similar to kinematics, have been extensively studied (Bocquelet et al., 2016; Bouchard et al., 2016; Carey and McGettigan, 2016; Fabre et al., 2015; Nam et al., 2010; Proctor et al., 2013; Westbury, 1990), there is no direct evidence of gestural representations in the brain. […] production was described as starting in the inferior frontal gyrus, with low-level, non-speech movements elicited in primary motor cortex (M1v; Broca, 1861; Penfield and Rasmussen, 1949).…”
mentioning
confidence: 99%
“…For comparison purposes, we also evaluate mappings using GMMs [4, 25, 26] and DNNs [13, 27], which have been successfully applied by ourselves and other authors to model the articulatory-to-acoustic mapping. For a fair comparison, GMMs and DNNs with approximately the same number of parameters as the RNN architecture (~1/2 million parameters) are employed.…”
Section: Model Training (mentioning)
confidence: 99%
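The excerpt above stresses matching model capacity across GMMs, DNNs and the RNN, at roughly half a million parameters each. Below is a minimal sketch, assuming PyTorch, of how a feed-forward articulatory-to-acoustic network could be sized to that budget; the input/output dimensions and hidden width are illustrative assumptions, not values taken from the cited work.

```python
# Minimal sketch, assuming PyTorch: a feed-forward articulatory-to-acoustic
# mapping sized to roughly half a million trainable parameters, so that it can
# be compared with recurrent models of similar capacity.

import torch
import torch.nn as nn

N_ARTIC = 14       # articulatory features per frame (assumption)
N_ACOUSTIC = 25    # acoustic features per frame (assumption)
HIDDEN = 490       # hidden width chosen so the total parameter count lands near 0.5 M

model = nn.Sequential(
    nn.Linear(N_ARTIC, HIDDEN), nn.Tanh(),
    nn.Linear(HIDDEN, HIDDEN), nn.Tanh(),
    nn.Linear(HIDDEN, HIDDEN), nn.Tanh(),
    nn.Linear(HIDDEN, N_ACOUSTIC),
)

# Count trainable parameters to verify the capacity budget (~0.50 M here).
n_params = sum(p.numel() for p in model.parameters())
print(f"trainable parameters: {n_params:,}")
```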