2011 IEEE Workshop on Automatic Speech Recognition & Understanding
DOI: 10.1109/asru.2011.6163916

Derivative kernels for noise robust ASR

Abstract: Recently there has been interest in combined generative/discriminative classifiers. In these classifiers, features for the discriminative models are derived from generative kernels. One advantage of using generative kernels is that systematic approaches exist for introducing complex dependencies beyond conditional independence assumptions. Furthermore, by using generative kernels, model-based compensation/adaptation techniques can be applied to make discriminative models robust to noise/speaker conditi…
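The abstract stops short of the details, but the core idea of a derivative (Fisher-score) feature space can be illustrated compactly. The sketch below assumes a diagonal-covariance GMM as a stand-in for the paper's generative model; it maps a variable-length observation sequence to a fixed-length vector of log-likelihood derivatives with respect to the component means. All function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def log_gaussian(x, mean, var):
    """Log density of a diagonal-covariance Gaussian at x."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def gmm_derivative_features(obs, weights, means, variances):
    """Concatenate d/d(mu_m) log p(O; lambda) over all M components.

    obs:       (T, D) observation sequence
    weights:   (M,)   component priors
    means:     (M, D) component means
    variances: (M, D) diagonal covariances
    Returns a fixed-length feature vector of size M*D, independent of T.
    """
    M = weights.shape[0]
    grad_means = np.zeros_like(means)
    for x in obs:
        # Component posteriors gamma_m(x), computed via log-sum-exp for stability.
        log_joint = np.array([np.log(weights[m]) + log_gaussian(x, means[m], variances[m])
                              for m in range(M)])
        gamma = np.exp(log_joint - np.logaddexp.reduce(log_joint))
        # Fisher score: d log p(x)/d mu_m = gamma_m * (x - mu_m) / var_m
        grad_means += gamma[:, None] * (x - means) / variances
    return grad_means.ravel() / len(obs)  # normalise by sequence length

# Usage: the resulting fixed-length vector feeds a discriminative classifier.
rng = np.random.default_rng(0)
O = rng.normal(size=(50, 13))            # e.g., 13-dim cepstral frames
w = np.array([0.5, 0.5])
mu = rng.normal(size=(2, 13))
var = np.ones((2, 13))
print(gmm_derivative_features(O, w, mu, var).shape)  # (26,)
```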

Cited by 21 publications (27 citation statements)
References 14 publications
“…To summarize our experimental results, by empirically tuning the phase factor, we achieved 16.8% WER for JUD/SGMM system on the Aurora 4 corpus, which is comparable to the state-of-the-art noise compensation results on this task [42], [46]. Further improvements have been observed by VTS-based noise adaptive training [39], joint speaker/noise compensation [42], and discriminative adaptive training [46].…”
Section: Discussion (supporting)
confidence: 61%
“…Joint learning of {λ, α} in the large margin framework will be investigated in the future. Future work will also involve the kernelization of the proposed structured SVM to support high-dimensional feature spaces such as the derivative feature space [28].…”
Section: Discussion (mentioning)
confidence: 99%
“…This feature space concatenates the log-likelihoods from all models, including the correct model and competing ones, to yield additional information from the observations. More general feature-spaces, such as derivative ones [28], can relax the conditional independence assumption. Using the above joint feature-spaces the dot-product of the φ(O, w; θ) and structured SVM parameter α can be evaluated by accumulating every segment score [14] …”
Section: A Joint Feature Space (mentioning)
confidence: 99%
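As a rough illustration of the quoted construction (a sketch under assumptions, not the cited paper's implementation): the joint feature vector for each segment concatenates log-likelihoods from the correct and competing models, and the structured SVM score, the dot product of α with φ(O, w; θ), is accumulated segment by segment. The model callables, word labels, and per-word α blocks below are all hypothetical stand-ins.

```python
import numpy as np

def segment_features(segment, models):
    """phi for one segment: log-likelihoods under every model in `models`.

    `models` is a list of callables returning log p(segment | model),
    standing in for the acoustic-model likelihood computations.
    """
    return np.array([m(segment) for m in models])

def structured_score(segments, labels, models_per_word, alpha):
    """<alpha, phi(O, w; theta)> accumulated segment by segment.

    alpha is partitioned per word; each word's block matches the
    dimensionality of its segment feature vector.
    """
    score = 0.0
    for seg, w in zip(segments, labels):
        score += alpha[w] @ segment_features(seg, models_per_word[w])
    return score

# Usage with toy single-Gaussian "models" for two words:
def gauss_ll(mean):
    return lambda seg: float(-0.5 * np.sum((seg - mean) ** 2))

models_per_word = {"yes": [gauss_ll(0.0), gauss_ll(1.0)],   # correct + competitor
                   "no":  [gauss_ll(1.0), gauss_ll(0.0)]}
alpha = {"yes": np.array([1.0, -1.0]), "no": np.array([1.0, -1.0])}
rng = np.random.default_rng(1)
segments = [rng.normal(0.0, 1.0, size=(10, 2)), rng.normal(1.0, 1.0, size=(8, 2))]
print(structured_score(segments, ["yes", "no"], models_per_word, alpha))
```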
“…Features for large-vocabulary systems will be extracted per phone, like in [20], so that the segmentation is likely to have greater impact on performance.…”
Section: Methods (mentioning)
confidence: 99%