2007
DOI: 10.1109/mwscas.2007.4488547
A new wavelet function for audio and speech processing

Cited by 4 publications (3 citation statements)
References 12 publications
“…However, if the speech signals are processed taking into account the form in which they are perceived by the human ear, similar or even better results may be obtained. Thus, using an ear model-based feature extraction method might represent an attractive alternative, since this approach allows characterizing the speech signal in the form in which it is perceived [8]. This section proposes a feature extraction method based on an inner ear model, which takes into account the fundamental concepts of critical bands.…”
Section: Feature Extraction Methods Based On Wavelet Transform
confidence: 99%
“…However, if the speech signals are processed taking into account the form in which they are perceived by the human ear, similar or even better results may be obtained. Thus, the use of an ear model-based feature extraction method may be an attractive alternative, because this approach allows characterizing the speech signal in the form in which it is perceived [16]. Thus, a feature extraction method based on an inner ear model, taking into account the fundamental concepts of critical bands, will be developed.…”
Section: Feature Vector Extraction
confidence: 99%
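The statements above describe a wavelet-based feature extraction front end organized around auditory critical bands, without giving implementation details. As a rough illustration of the idea only, the sketch below computes per-subband log energies from a dyadic wavelet decomposition, whose octave-wide bands loosely mimic the wider-at-high-frequency spacing of critical bands; the wavelet family ("db4"), decomposition depth, and log-energy features are assumptions, not the cited authors' exact design.

```python
# A minimal sketch of wavelet-based subband feature extraction, assuming a
# dyadic decomposition as a rough stand-in for the critical-band analysis
# described in the citing papers. The wavelet, depth, and log-energy
# features are illustrative choices, not the authors' method.
import numpy as np
import pywt

def subband_log_energies(frame, wavelet="db4", levels=5):
    """Decompose one speech frame and return log-energy per subband."""
    coeffs = pywt.wavedec(frame, wavelet, level=levels)  # [cA5, cD5, ..., cD1]
    # Each detail band spans roughly one octave, loosely mimicking the
    # nonuniform frequency spacing of auditory critical bands.
    return np.array([np.log(np.sum(c ** 2) + 1e-12) for c in coeffs])

# Usage: a 32 ms frame at the 8 kHz sampling rate mentioned for the recordings.
fs = 8000
frame = np.random.randn(256)           # placeholder for a voiced speech frame
features = subband_log_energies(frame)
print(features.shape)                  # (levels + 1,) feature vector
```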
“…In all cases, 650 different alaryngeal voiced segments with a convergence factor equal to 0.009 are used, achieving a global mean square error of 0.1 after 400,000 iterations [16]. Figure 4 shows the plots of mono-aural recordings of the Spanish word "abeja", pronounced by a normal and an esophageal speaker, respectively, with a sampling frequency of 8 kHz, including the detected voiced segments.…”
Section: Classification Stage
confidence: 99%
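The statement above quotes a convergence factor and an iteration count without naming the update rule or the model being trained. As one plausible reading, the sketch below shows an LMS-style iterative update in which the convergence factor scales each weight correction; the linear predictor, the synthetic data, and the weight dimension are purely illustrative assumptions.

```python
# A minimal sketch of iterative training driven by a convergence factor,
# assuming an LMS-style update. The source reports mu = 0.009 and a global
# MSE of 0.1 after 400,000 iterations but does not spell out the model, so
# the linear predictor and synthetic data below are purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
mu = 0.009                        # convergence factor from the quoted passage
w = np.zeros(8)                   # weights of a hypothetical linear model

for step in range(400_000):       # iteration count from the quoted passage
    x = rng.standard_normal(8)    # placeholder feature vector for one segment
    d = 0.3 * (x @ np.ones(8))    # placeholder target value
    e = d - w @ x                 # prediction error for this iteration
    w += mu * e * x               # LMS update scaled by the convergence factor

print("final weights:", w)
```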