Proceedings. (ICASSP '05). IEEE International Conference on Acoustics, Speech, and Signal Processing, 2005.
DOI: 10.1109/icassp.2005.1415243
Adaptation Strategies for the Acoustic and Language Models in Bilingual Speech Transcription

Cited by 6 publications
(5 citation statements)
References 9 publications
“…To measure the quality of the language models built with n-grams extracted from corpora (Chen & Goodman, 1996; Sennrich, 2012; Dieguez-Tirado et al., 2005), we use perplexity:…”
Section: Perplexidade (unclassified)
“…Perplexity is frequently used as a quality measure for language models built with n-grams extracted from text corpora (Chen and Goodman, 1996; Dieguez-Tirado, Garcia-Mateo, Docio-Fernandez, and Cardenal-Lopez, 2005; Sennrich, 2012). It has also been used in very specific tasks, such as classifying formal and colloquial tweets (González, 2015) and identifying closely related languages (Gamallo, Alegria, Pichel, and Agirrezabal, 2016).…”
Section: Perplexity-based Measurement (mentioning)
confidence: 99%
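The statements above use perplexity as a quality measure for n-gram language models. A minimal sketch of the computation from per-token probabilities; the helper name and the uniform-model example are illustrative, not taken from any of the cited papers:

```python
import math

def perplexity(token_probs):
    """Perplexity of a sequence given the model's per-token probabilities.

    perplexity = exp(-(1/N) * sum(log p_i)); lower is better.
    """
    n = len(token_probs)
    log_sum = sum(math.log(p) for p in token_probs)
    return math.exp(-log_sum / n)

# A model that is uniform over a 4-word vocabulary assigns p = 0.25
# to every token, giving perplexity 4 (the effective branching factor).
print(round(perplexity([0.25, 0.25, 0.25, 0.25]), 6))
```

In practice the per-token probabilities come from the n-gram model being evaluated on held-out text, so lower perplexity indicates a better fit to that corpus.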
“…We employ a large-vocabulary continuous speech recognizer based on Continuous Hidden Markov Models (CHMM). The recognition engine is a two-pass recognizer: a time-synchronous Viterbi beam search, followed by an A* algorithm [5].…”
Section: Baseline Recognition System (mentioning)
confidence: 99%
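The first pass described above is a time-synchronous Viterbi search. A textbook dynamic-programming sketch on a discrete HMM; the function and table layout are illustrative assumptions, not the recognizer's actual code, and a real first pass would additionally prune each time step to a beam of the highest-scoring hypotheses:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most probable state sequence through a discrete HMM.

    Exhaustive textbook version (no beam pruning).
    """
    # V maps each state to (probability of best path ending there, that path).
    V = {s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}
    for o in obs[1:]:
        nxt = {}
        for s in states:
            # Pick the best predecessor state r for state s.
            r = max(states, key=lambda r: V[r][0] * trans_p[r][s])
            p, path = V[r]
            nxt[s] = (p * trans_p[r][s] * emit_p[s][o], path + [s])
        V = nxt
    return max(V.values(), key=lambda v: v[0])[1]
```

The A* second pass would then rescore the surviving hypotheses with more expensive knowledge sources.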
“…Our topic-adapted recognizers make use of topic-adapted language models generated from SpeechDAT orthographic transcriptions. Topic adaptation is achieved by mixing n-gram models [5]. This mixture is generated in several steps.…”
Section: Topic-adapted Language Models (mentioning)
confidence: 99%
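The mixing of n-gram models mentioned above is commonly realized as linear interpolation of the component models' probabilities. A minimal sketch under assumed data structures (a dict mapping (history, word) pairs to probabilities); the function name is hypothetical, and in practice the mixture weights would be tuned on held-out text:

```python
def mix_ngram_models(models, weights):
    """Linearly interpolate n-gram models: p_mix(w|h) = sum_i w_i * p_i(w|h).

    Each model is a dict mapping (history, word) -> probability.
    """
    mixed = {}
    for lam, model in zip(weights, models):
        for key, prob in model.items():
            mixed[key] = mixed.get(key, 0.0) + lam * prob
    return mixed
```

With per-topic component models, re-estimating the weights for a target topic yields the topic-adapted language model.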