Interspeech 2018
DOI: 10.21437/interspeech.2018-2484

Articulation-to-Speech Synthesis Using Articulatory Flesh Point Sensors’ Orientation Information

Cited by 20 publications (22 citation statements)
References 14 publications
“…This has the main idea of recording the soundless articulatory movement, and automatically generating speech from the movement information, while the subject is not producing any sound. For this automatic conversion task, typically electromagnetic articulography (EMA) [2,3,4,5], ultrasound tongue imaging (UTI) [6,7,8,9,10,11,12,13], permanent magnetic articulography (PMA) [14,15], surface electromyography (sEMG) [16,17,18], Non-Audible Murmur (NAM) [19] or video of the lip movements [7,20] are used.…”
Section: Introduction (citation type: mentioning, confidence: 99%)
“…There are two distinct ways of SSI solutions, namely 'direct synthesis' and 'recognition-and-synthesis' [21]. In the first case, the speech signal is generated without an intermediate step, directly from the articulatory data, typically using vocoders [4,5,6,8,9,10,11,15,17]. In the second case, silent speech recognition (SSR) is applied on the biosignal which extracts the content spoken by the person (i.e.…”
Section: Introduction (citation type: mentioning, confidence: 99%)
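To make the 'direct synthesis' route described in the statement above concrete, the following minimal sketch regresses a window of articulatory sensor frames onto one frame of vocoder parameters. The sensor count, feature dimensions, context length, and network shape are illustrative assumptions only; they are not the configuration of the cited paper or of any particular SSI system.

# Minimal sketch of the 'direct synthesis' path: frame-level regression from
# articulatory features to vocoder parameters. All dimensions are assumptions.
import torch
import torch.nn as nn

N_SENSORS = 6          # assumed number of flesh-point sensors (e.g. EMA coils)
FEATS_PER_SENSOR = 5   # e.g. x, y, z plus two orientation angles (assumption)
CONTEXT = 11           # number of stacked articulatory frames per input
VOCODER_DIM = 27       # e.g. 25 spectral coefficients + log-F0 + V/UV flag

class DirectSynthesisNet(nn.Module):
    """Maps a window of articulatory frames to one frame of vocoder parameters."""
    def __init__(self):
        super().__init__()
        in_dim = N_SENSORS * FEATS_PER_SENSOR * CONTEXT
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, VOCODER_DIM),
        )

    def forward(self, x):  # x: (batch, in_dim)
        return self.net(x)

# One training step on random stand-in data; real training would use parallel
# articulatory/acoustic recordings, and a vocoder would turn the predicted
# parameters back into a waveform.
model = DirectSynthesisNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, N_SENSORS * FEATS_PER_SENSOR * CONTEXT)
y = torch.randn(32, VOCODER_DIM)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
optimizer.step()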
“…Furthermore, direct speech generation from the captured biosignals (see Section III-B) is another possibility, having this approach the potential to restore the person's own voice, if enough recordings of the pre-laryngectomy voice are available for training [96]. This second approach has been also validated for various modalities, including sEMG [31], [97]- [100], PMA [25], [96], [101]- [103], video-and-ultrasound [104]- [106] and Doppler signals [107]. To sum up, the foundations have been laid for a future SSI-based device for postlaryngectomy speech rehabilitation.…”
Section: Voice Disorders (citation type: mentioning, confidence: 99%)
“…Biosignal-based speech communication has shown increasing promise towards various clinical applications [1] such as silent speech interface (SSI), which directly converts non-audio articulatory information to speech to help individuals who have lost their ability of speech production but can still articulate silently (e.g., laryngectomees) [2]. Besides novel tongue and lip motion tracking devices, current SSI research is focused on developing algorithms that can directly map the articulatory information to speech (text/acoustics) accurately and efficiently [2][3][4][5][6].…”
Section: Introduction (citation type: mentioning, confidence: 99%)
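The frame-level articulatory-to-acoustic mapping these statements refer to needs the sensor trajectories arranged as fixed-size input frames; a common preprocessing step is to stack a symmetric context window around each frame. The sketch below uses assumed sampling rates and feature dimensions, not values taken from the cited works.

# Minimal sketch of preparing articulatory input frames for a frame-level
# mapping: stack a symmetric context window around each frame. Dimensions
# and frame rate are illustrative assumptions.
import numpy as np

def stack_context(frames: np.ndarray, context: int = 5) -> np.ndarray:
    """frames: (T, D) articulatory features; returns (T, (2*context+1)*D)."""
    T, D = frames.shape
    padded = np.pad(frames, ((context, context), (0, 0)), mode="edge")
    return np.stack(
        [padded[t:t + 2 * context + 1].reshape(-1) for t in range(T)]
    )

# Example: 2 seconds of 6-sensor, 5-feature articulatory data at 200 Hz.
ema = np.random.randn(400, 6 * 5)
inputs = stack_context(ema, context=5)   # shape (400, 330)
print(inputs.shape)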