1997
DOI: 10.1006/csla.1997.0026
String-based minimum verification error (SB-MVE) training for speech recognition

Cited by 13 publications (6 citation statements)
References 18 publications
“…The above models K_T and K_C can be estimated based on different criteria, such as maximum likelihood (ML) or minimum verification error (MVE), etc. shows the ML-trained models already significantly surpass the conventional UV methods, such as in (Sukkar and Lee, 1996; Sukkar et al., 1997; Rahim et al., 1997; Rahim and Lee, 1997a).…”
Section: In-search Data Selection For Accurate Competing Models
confidence: 96%
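As a rough illustration of the verification scheme this excerpt refers to, a minimal sketch of a log-likelihood-ratio test between a target model K_T and a competing model K_C is shown below. The function name, scores, frame count, and threshold are all hypothetical; the excerpt only names the two models, not their form.

```python
def verify_utterance(loglik_target, loglik_competing, n_frames, threshold=0.0):
    """Accept the recognized string if the frame-normalized log-likelihood
    ratio between the target model (K_T) and the competing model (K_C)
    exceeds the threshold; reject it otherwise."""
    llr = (loglik_target - loglik_competing) / n_frames  # frame-normalized LLR
    return llr >= threshold

# Hypothetical scores for one recognized word string.
print(verify_utterance(loglik_target=-1250.0, loglik_competing=-1310.0, n_frames=120))
# -> True: the target model explains the utterance better than the competitor
```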
“…In , it is found that minimum classification error (MCE) training, which was originally proposed to reduce recognition errors, can also contribute to improving the performance of UV. In (Rahim and Lee, 1997a; Sukkar et al., 1997), a GPD-based training algorithm is proposed to achieve minimum verification error (MVE) estimation for utterance verification with respect to the verification HMM parameters. In MVE, the string-level verification errors are approximated by a sigmoid function embedded with a misverification function, which is in fact the negative log-likelihood ratio used in verification.…”
Section: As Utterance Verification
confidence: 99%
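A minimal sketch of the smoothing described in this excerpt, assuming a per-string misverification function equal to the negative log-likelihood ratio: the sigmoid makes the 0/1 verification error differentiable so GPD-style gradient descent can be applied. The values of gamma, theta, and the example scores are hypothetical, and real MVE training updates the verification HMM parameters rather than a scalar score.

```python
import math

def misverification(loglik_target, loglik_competing):
    # d(X): the negative log-likelihood ratio between the target and
    # competing verification models; large d suggests a misverification.
    return -(loglik_target - loglik_competing)

def smoothed_string_error(d, gamma=0.1, theta=0.0):
    # Sigmoid embedding of d(X): a differentiable stand-in for the 0/1
    # string-level verification error used in GPD-based optimization.
    return 1.0 / (1.0 + math.exp(-gamma * (d + theta)))

# Hypothetical scores for a correctly recognized (true-keyword) string.
d = misverification(loglik_target=-1250.0, loglik_competing=-1310.0)
print(smoothed_string_error(d))  # near 0: small smoothed verification error
```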
“…misclassification measure), similar to the log-likelihood ratio approach in Bayes decision theory [27]. It has been shown that discriminative learning outperforms a plain binary-classification treatment in automatic speech recognition, and it has been applied in minimum error classification [28] and minimum verification error [29]. The second approach explores the maximal figure-of-merit (MFoM) [30], [31] learning solution, which approximates the metrics of interest, namely micro-F1 and the equal error rate (EER), with differentiable functions, so that gradient-based optimization algorithms can be applied to learn the DNN parameters.…”
Section: Motivation
confidence: 99%
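A hedged illustration of the idea in this excerpt, not the exact MFoM formulation of [30], [31]: if hard per-class decisions are relaxed to sigmoid outputs, the true/false positive counts become differentiable and a soft micro-F1 can be optimized with gradient methods. The function name and the sharpness parameter alpha are assumptions for the sketch.

```python
import numpy as np

def soft_micro_f1(scores, labels, alpha=10.0):
    """Differentiable surrogate of micro-F1.
    scores: (N, C) real-valued class scores, labels: (N, C) 0/1 targets,
    alpha: how sharply the sigmoid approximates a hard 0/1 decision."""
    p = 1.0 / (1.0 + np.exp(-alpha * scores))  # soft decisions in (0, 1)
    tp = np.sum(p * labels)                    # soft true positives
    fp = np.sum(p * (1 - labels))              # soft false positives
    fn = np.sum((1 - p) * labels)              # soft false negatives
    return 2 * tp / (2 * tp + fp + fn)
```

EER can be smoothed in the same spirit by replacing the hard false-alarm and miss counts with the same sigmoid scores.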
“…It may be desirable to estimate the parameters of a logistic regression model by maximizing F-measure during training. This is analogous, and in a certain sense equivalent, to empirical risk minimization, which has been used successfully in related areas, such as speech recognition (Rahim and Lee, 1997), language modeling (Paciorek and Rosenfeld, 2000), and machine translation (Och, 2003).…”
Section: Introduction
confidence: 96%
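A minimal sketch of the approach this excerpt describes, under the assumption that the F-measure's hard counts are replaced by predicted probabilities so it becomes a differentiable function of the logistic-regression weights. The toy data, learning rate, and the central-difference optimizer are hypothetical; the cited works use their own smoothing and optimization schemes.

```python
import numpy as np

def soft_f_measure(w, X, y):
    # Replace hard decision counts with predicted probabilities so the
    # F-measure becomes a differentiable function of the weights w.
    p = 1.0 / (1.0 + np.exp(-X @ w))
    tp = np.sum(p * y)            # expected true positives
    fp = np.sum(p * (1 - y))      # expected false positives
    fn = np.sum((1 - p) * y)      # expected false negatives
    return 2 * tp / (2 * tp + fp + fn + 1e-12)

def train_by_f_measure(X, y, lr=0.5, steps=200, eps=1e-4):
    # Maximize the soft F-measure with simple central-difference gradient ascent.
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = np.array([
            (soft_f_measure(w + eps * e, X, y)
             - soft_f_measure(w - eps * e, X, y)) / (2 * eps)
            for e in np.eye(X.shape[1])
        ])
        w += lr * grad
    return w

# Toy, roughly separable data: two features plus a bias column (hypothetical).
rng = np.random.default_rng(0)
X = np.hstack([rng.normal(size=(200, 2)), np.ones((200, 1))])
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
print(soft_f_measure(train_by_f_measure(X, y), X, y))  # approaches 1.0
```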