1997
DOI: 10.1016/s0167-6393(97)00031-9
Verifying and correcting recognition string hypotheses using discriminative utterance verification

Cited by 22 publications (6 citation statements)
References 22 publications
“…The above models K_T and K_C can be estimated based on different criteria, such as maximum likelihood (ML) or minimum verification error (MVE), etc. The ML-trained models are shown to already significantly surpass the conventional UV methods, such as those in (Sukkar and Lee, 1996; Sukkar et al., 1997; Rahim et al., 1997; Rahim and Lee, 1997a).…”
Section: In-search Data Selection For Accurate Competing Models
confidence: 96%
“…In , it is found that minimum classification error (MCE) training, which was originally proposed to reduce recognition errors, can also contribute to improving the performance of UV. In (Rahim and Lee, 1997a; Sukkar et al., 1997), a GPD-based training algorithm is proposed to achieve minimum verification error (MVE) estimation for utterance verification by optimizing the verification HMM parameters. In MVE, the string-level verification errors are approximated by a sigmoid function embedded with a misverification function, which is actually the negative log-likelihood ratio used in verification.…”
Section: As Utterance Verification
confidence: 99%
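The sigmoid-embedded misverification loss described in the statement above can be sketched as follows. This is a minimal illustration of the general MVE idea, not the cited papers' implementation; the smoothing parameters `gamma` and `theta` are illustrative assumptions.

```python
import math

def misverification(log_lr: float) -> float:
    """Misverification measure: the negative log-likelihood ratio.
    Positive values indicate a likely verification error."""
    return -log_lr

def mve_loss(log_lr: float, gamma: float = 1.0, theta: float = 0.0) -> float:
    """Smooth approximation of a 0-1 string-level verification error:
    a sigmoid applied to the (shifted, scaled) misverification measure.
    Differentiable, so it can be minimized by gradient methods such as GPD."""
    d = misverification(log_lr)
    return 1.0 / (1.0 + math.exp(-gamma * (d + theta)))
```

A large positive log-likelihood ratio (confident correct verification) drives the loss toward 0, while a large negative ratio drives it toward 1, so minimizing the average loss over training strings approximately minimizes the verification error rate.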
“…Train SpeakersGMMs using Set_{1,2};
Train WorldGMM using WorldSet_{1,2};
foreach TrainSession in (3, 4, 5, 6):
    Calculate Thresholds using Set_TrainSession, SpeakersGMMs, WorldGMM;
    foreach TestSession in (3, 4, 5, 6), TestSession != TrainSession:
        Obtain %FA and %FR using Set_TestSession, SpeakersGMMs, WorldGMM, Thresholds;…”
Section: A Priori Authentication
confidence: 99%
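The session-pairing logic in the quoted pseudocode can be sketched in Python; the model training and scoring steps are stubbed out here, and only the nested-loop structure (thresholds fit on one session, error rates measured on a different one) is shown.

```python
from itertools import product

def cross_session_pairs(sessions=(3, 4, 5, 6)):
    """Yield (train_session, test_session) pairs with distinct sessions,
    mirroring the nested foreach loops of the quoted pseudocode."""
    for train, test in product(sessions, repeat=2):
        if train != test:
            yield train, test

pairs = list(cross_session_pairs())
# 4 sessions minus the diagonal gives 12 train/test evaluation pairs.
```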
“…One is based on verifying the actual content of the speech as a means of verifying the speaker identity [3][4][5]. This can be considered as the classical "password-based" approach and implies that the speech content must be kept secret and attached to the speaker.…”
Section: Introduction
confidence: 99%
“…Here we use the same notation for the word and the corresponding model interchangeably without ambiguity. In practice, p(X_u|w_u) is represented by some functional form such as a linear combination of the likelihood values for several models, e.g., the background and the impostor models [2], or the anti-keyword and the filler models [5]. The legitimacy of SLR-based hypothesis testing depends on the conditional probability density function (pdf) being accurate for each word, and in the case of continuous speech recognition, it also depends on the condition that the acoustic token X_u is correctly segmented.…”
Section: Introduction
confidence: 99%
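The SLR-based hypothesis test described above can be sketched as a log-likelihood-ratio decision. This is a hedged illustration: the combination weights, likelihood values, and threshold are placeholders, not values from the cited papers.

```python
import math

def combined_likelihood(likelihoods, weights):
    """Alternative-hypothesis likelihood as a linear combination of
    likelihood values from several models (e.g., background and impostor,
    or anti-keyword and filler models)."""
    return sum(w * p for w, p in zip(weights, likelihoods))

def slr_accept(p_target: float, p_alternative: float, threshold: float = 0.0) -> bool:
    """Accept the hypothesized word w_u for token X_u when the
    log-likelihood ratio log p(X_u|w_u) - log p(X_u|alt) exceeds
    the decision threshold."""
    return math.log(p_target) - math.log(p_alternative) > threshold
```

For example, with an alternative likelihood formed as an equal-weight combination of two competing-model likelihoods, a target likelihood well above that combination is accepted and one below it is rejected.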