2013
DOI: 10.1109/tasl.2012.2221461
Classification and Ranking Approaches to Discriminative Language Modeling for ASR

Abstract: Discriminative language modeling (DLM) is a feature-based approach used as an error-correcting step after hypothesis generation in automatic speech recognition (ASR). We formulate this both as a classification and as a ranking problem and employ the perceptron, the margin infused relaxed algorithm (MIRA), and the support vector machine (SVM). To decrease training complexity, we try count-based thresholding for feature selection and data sampling from the list of hypotheses. On a Turkish morphology…
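The abstract describes training linear models (e.g. a perceptron) to re-rank ASR hypotheses against an oracle. A minimal sketch of such a perceptron-style discriminative reranker, assuming simple n-gram count features — the function names and feature choice are illustrative, not the paper's implementation:

```python
# Hedged sketch of a perceptron-style discriminative reranker over an
# N-best list, assuming sparse n-gram count features (illustrative only).
from collections import Counter

def ngram_features(hypothesis, n=2):
    """Map a hypothesis (list of tokens) to sparse n-gram count features."""
    feats = Counter()
    for order in range(1, n + 1):
        for i in range(len(hypothesis) - order + 1):
            feats[tuple(hypothesis[i:i + order])] += 1
    return feats

def score(weights, feats):
    """Linear model score: inner product of weights and features."""
    return sum(weights.get(f, 0.0) * v for f, v in feats.items())

def perceptron_update(weights, nbest, oracle):
    """One perceptron step: promote the oracle (lowest-WER) hypothesis,
    demote the currently best-scoring one when they differ."""
    best = max(nbest, key=lambda s: score(weights, ngram_features(s)))
    if best != oracle:
        for f, v in ngram_features(oracle).items():
            weights[f] = weights.get(f, 0.0) + v
        for f, v in ngram_features(best).items():
            weights[f] = weights.get(f, 0.0) - v
    return weights
```

After an update, the oracle hypothesis scores above the previously preferred one, which is the error-correcting behavior the abstract refers to.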

Cited by 15 publications (7 citation statements) · References 16 publications (28 reference statements)
“…Some work has also been done on optimizing LMs for ASR purposes using discriminative training. Kuo et al. (2002), Roark et al. (2007) and Dikici et al. (2013) all improve LMs for ASR by maximizing the probability of the correct candidates. All of them use candidates of ASR systems as "negative" examples and train n-gram LMs or use linear models for classifying candidates.…”
Section: Related Work
confidence: 99%
“…DLM [9], [27], [34] learns patterns of errors in the N-best hypotheses output by a speech recognizer, and adjusts the hypotheses' scores so that the one with the fewest errors is selected. The score can be modified simply using the inner product of a feature vector φ(s) extracted from a hypothesis s and a weight vector w. The re-scored best hypothesis ŝ⁽ʳ⁾ is then obtained as:…”
Section: Discriminative Language Modeling
confidence: 99%
“…In the ASR post-processing step, we propose to use a rescoring technique based on a simple combination of discriminative language modeling (DLM) [9], [27], [34] and minimum Bayes risk (MBR) decoding [5], [14], [24], [45]. In contrast with [24], which performs DLM with the MBR criterion, our work combines DLM and MBR decoding in cascade form: we simply use the re-ranked 1-best obtained through DLM to initialize the MBR decoding.…”
Section: Introduction
confidence: 99%
“…There is a lot of research on N-best rescoring in the natural language processing (NLP) and speech domains [6,7]. Most existing speech reranking approaches focus on reranking lattices with a large language model or with additional features [5,8,9,10,6,11]. Reranking with additional features usually works in scenarios where there is additional knowledge that is not available or too expensive to compute in the first pass, such as syntactic features [12,13] or morphological features and N-best edit distance features [14].…”
Section: Related Work
confidence: 99%