2006 8th International Conference on Signal Processing
DOI: 10.1109/icosp.2006.345512
Noise Robust Vocoding at 2400 bps

Cited by 3 publications (5 citation statements)
References 7 publications
“…A major focal point was the DARPA Advanced Speech Encoding Program (ASE) of the early 2000's, which funded research on low bit rate speech synthesis "with acceptable intelligibility, quality, and aural speaker recognizability in acoustically harsh environments", thus spurring developments in speech processing using a variety of mechanical and electromagnetic glottal activity sensors (Tardelli Ed. (2004); Preuss et al. (2006); Quatieri et al. (2006)).…”
Section: Historical Framework
Confidence: 99%
“…(2004); Quatieri et al. (2006)). In Ng et al. (2000), perfectly intelligible speech was obtained using a GEMS device (described below) from a signal with an initial signal-to-noise ratio of only 3 dB, while excellent results on noise robust vocoding in three harsh military noise environments using GEMS and PMIC (described below) are reported in Preuss et al. (2006).…”
Section: Accepted Manuscript
Confidence: 99%
“…In particular, silent speech recognition systems (SSRSs) enable speech communication when an audible acoustic signal is unavailable [1]. In addition to "physical" SSRSs [2][3][4][5], in "electrical" SSRSs articulation may be inferred from actuator muscle signals or predicted using command signals obtained directly from the brain. Notably, the latter could serve as a speech prosthesis for individuals with severe communication impairments.…”
Section: Introduction
Confidence: 99%