1999
DOI: 10.1006/csla.1998.0118
Interpolation of n-gram and mutual-information based trigger pair language models for Mandarin speech recognition

Cited by 20 publications (5 citation statements)
References 14 publications
“…The main idea of the TR-classifier is based on computing the average mutual information (AMI) of each couple of words belonging to the vocabulary V_i. Couples of words, or "triggers", that are considered important for a topic identification task are those with the highest AMI values [8,18]. Each topic is then endowed with a number M of selected triggers, calculated using training corpora of topic T_i.…”
Section: An Overview on TR-classifier
confidence: 99%
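The trigger-selection scheme described in the quoted passage can be sketched in Python. This is illustrative only: the windowed-set corpus format, the function names, and the toy data are assumptions, not the cited papers' implementation; only the AMI-ranking idea comes from the text.

```python
import math
from itertools import combinations

def average_mutual_information(corpus_windows, a, b):
    """AMI of the word pair (a, b): the mutual information of their
    presence/absence indicators, summed over the four joint events.
    corpus_windows is a list of word sets (e.g. documents or history
    windows) -- a hypothetical input format."""
    n = len(corpus_windows)
    n_a = sum(1 for w in corpus_windows if a in w)
    n_b = sum(1 for w in corpus_windows if b in w)
    n_ab = sum(1 for w in corpus_windows if a in w and b in w)
    ami = 0.0
    for count_a, count_b, joint in [
        (n_a, n_b, n_ab),                          # a present, b present
        (n_a, n - n_b, n_a - n_ab),                # a present, b absent
        (n - n_a, n_b, n_b - n_ab),                # a absent, b present
        (n - n_a, n - n_b, n - n_a - n_b + n_ab),  # both absent
    ]:
        p_joint, p_a, p_b = joint / n, count_a / n, count_b / n
        if p_joint > 0 and p_a > 0 and p_b > 0:
            ami += p_joint * math.log2(p_joint / (p_a * p_b))
    return ami

def top_triggers(corpus_windows, vocab, m):
    """Keep the M word pairs with the highest AMI for one topic."""
    scored = {pair: average_mutual_information(corpus_windows, *pair)
              for pair in combinations(sorted(vocab), 2)}
    return sorted(scored, key=scored.get, reverse=True)[:m]
```

On a toy topic corpus where "stock" and "market" co-occur consistently, that pair receives the highest AMI and is selected as a trigger.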
“…On the one hand, the Normalized Mutual Information (NMI), previously used for the estimation of parameters of acoustic models for speech recognition [1] and for the adaptation of trigger-based LMs [5]; on the other hand, a minimization of the global perplexity of an LM obtained as the interpolation of all the clusters considered.…”
Section: Introduction
confidence: 99%
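The perplexity-minimizing interpolation mentioned in this passage is conventionally estimated with EM on held-out data. This is a minimal sketch under that assumption; `stream_probs` and the function name are hypothetical, not taken from the citing paper.

```python
def em_interpolation_weights(stream_probs, iters=50):
    """Estimate linear-interpolation weights for several component LMs
    by EM on held-out data -- the standard way to minimize the
    perplexity of the interpolated model.  stream_probs[t][i] is the
    probability component model i assigns to the t-th held-out word
    (a hypothetical input format)."""
    k = len(stream_probs[0])
    lam = [1.0 / k] * k                      # uniform initialization
    for _ in range(iters):
        counts = [0.0] * k
        for probs in stream_probs:
            mix = sum(l * p for l, p in zip(lam, probs))
            for i in range(k):               # posterior responsibility
                counts[i] += lam[i] * probs[i] / mix
        total = sum(counts)
        lam = [c / total for c in counts]    # re-estimate weights
    return lam
```

Each EM iteration provably does not increase held-out perplexity, so the weights converge to a local (here global, since the objective is convex in the weights) optimum.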
“…Language modeling is the attempt to characterize, capture and exploit the regularities and constraints in natural language. Among various language modeling approaches, n-gram modeling has been widely used in many applications, such as speech recognition and machine translation (Katz 1987; Jelinek 1989; Gale and Church 1990; Brown et al. 1992; Yang et al. 1996; Bai et al. 1998; Zhou et al. 1999; Rosenfeld 2000; Gao et al. 2002). Although n-gram modeling is simple in nature and easy to use, it has obvious deficiencies.…”
Section: Introduction
confidence: 99%
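The "obvious deficiencies" alluded to above include data sparsity: a maximum-likelihood n-gram model assigns zero probability to any n-gram unseen in training, which is what smoothing and interpolation (the subject of the indexed paper) address. A minimal bigram sketch, with all names hypothetical:

```python
from collections import Counter

def train_bigram(sentences):
    """Maximum-likelihood bigram model: returns a function giving
    P(w2 | w1) from raw counts, with no smoothing."""
    bigrams, unigrams = Counter(), Counter()
    for sentence in sentences:
        tokens = ["<s>"] + sentence + ["</s>"]
        unigrams.update(tokens[:-1])
        bigrams.update(zip(tokens, tokens[1:]))
    return lambda w1, w2: (bigrams[(w1, w2)] / unigrams[w1]
                           if unigrams[w1] else 0.0)

# The deficiency: any bigram unseen in training gets probability 0,
# which smoothing and model interpolation are designed to repair.
```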
“…Psychological experiments in Meyer et al. (1975) indicated that the human reaction to a highly associated word pair was stronger and faster than that to a poorly associated word pair. Such preference information is very useful for natural language processing (Church et al. 1990; Hiddle et al. 1993; Rosenfeld 1994; Zhou et al. 1998; Zhou et al. 1999). Obviously, the preference relationships between words can extend from a short to a long distance.…”
Section: Introduction
confidence: 99%