2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2013)
DOI: 10.1109/icassp.2013.6639306
Recurrent neural network language modeling for code switching conversational speech

Cited by 75 publications (63 citation statements)
References 11 publications
“…In addition to the well-established research line in linguistics, implications of CS and other kinds of language switches for speech-to-text systems have recently received some research interest, resulting in some robust acoustic modeling [1][2][3][4][5] and language modeling [6][7][8] approaches for CS speech. Language identification (LID) is a relevant task for the automatic speech recognition (ASR) of CS speech [9][10][11][12].…”
Section: Introductionmentioning
confidence: 99%
“…In particular, we design a novel feature by applying a recurrent neural network language model (RNNLM) to OCR confusion networks (c-nets). Since its first notable application in speech recognition [9], the RNNLM has gained increasing attention through its success in a variety of tasks, such as speech recognition, machine translation, OCR, and keyword spotting [9][10][11][12][13]. To the best of our knowledge, our work is the first application of RNNLM in OCR error detection, and it shows that the approach effectively improves the error detection rate.…”
Section: Introductionmentioning
confidence: 94%
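As a rough illustration of the idea quoted above, the following Python sketch enumerates paths through a toy OCR confusion network (a sequence of slots, each holding alternative words with posteriors) and derives, for every slot/word pair, a score that combines the slot posteriors with an RNNLM sentence score. The cnet data structure, the rnnlm_logprob stub, and the feature definition are simplified assumptions for illustration only, not the cited paper's exact formulation.

import itertools
import math

def rnnlm_logprob(words):
    # Stand-in for a trained RNNLM sentence scorer; a toy uniform model
    # is used so the example runs without external dependencies.
    return -len(words) * math.log(10000)

# Toy confusion network: one slot per word position, each slot holding
# alternative words with their posterior probabilities.
cnet = [
    [("form", 0.6), ("farm", 0.4)],
    [("the", 0.9), ("tho", 0.1)],
    [("report", 0.7), ("rep0rt", 0.3)],
]

best_feature = {}
for path in itertools.product(*cnet):
    words = [w for w, _ in path]
    # Combine slot posteriors (in the log domain) with the RNNLM score.
    score = sum(math.log(p) for _, p in path) + rnnlm_logprob(words)
    for i, (w, _) in enumerate(path):
        # Feature: best combined score over paths through (slot i, word w).
        best_feature[(i, w)] = max(best_feature.get((i, w), float("-inf")), score)

for key in sorted(best_feature):
    print(key, round(best_feature[key], 2))

In an error-detection setting, such per-word scores would typically be fed, alongside other confidence measures, to a classifier that flags likely OCR errors.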
“…history to compute the LM scores. For this reason, works using RNNLM often applied it to n-best rescoring because it is practically difficult to efficiently utilize wide context (more than 3-gram) on lattices [9][10][11][13]. Works that applied RNNLM to lattices usually made approximations to compute the LM scores [12][27][28][29].…”
Section: A Recurrent Neural Network Language Modelmentioning
confidence: 99%
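The n-best rescoring setup mentioned in this snippet can be sketched in a few lines. The Python fragment below is a minimal, hypothetical illustration assuming a generic sentence-level RNNLM scorer: each hypothesis keeps its first-pass decoder score, the RNNLM log-probability is added with an interpolation weight, and the list is re-ranked. The rnnlm_logprob stub, the 0.5 weight, and the example hypotheses are assumptions made here, not the cited systems' implementations.

import math

def rnnlm_logprob(words):
    # Stand-in for a trained RNNLM: returns a sentence log-probability.
    # A toy uniform model keeps the sketch self-contained and runnable.
    return -len(words) * math.log(10000)

def rescore_nbest(nbest, lm_weight=0.5):
    # nbest: list of (hypothesis_words, first_pass_score); higher is better.
    # The first-pass score would come from the decoder (acoustic + n-gram LM);
    # the RNNLM log-probability is added with an interpolation weight.
    rescored = []
    for words, first_pass in nbest:
        total = first_pass + lm_weight * rnnlm_logprob(words)
        rescored.append((total, words))
    rescored.sort(key=lambda x: x[0], reverse=True)
    return [words for _, words in rescored]

# Toy code-switching n-best list (German-English), scores are illustrative.
nbest = [
    (["das", "ist", "very", "good"], -41.8),
    (["das", "ist", "sehr", "gut"], -42.3),
]
print(rescore_nbest(nbest)[0])

Because the RNNLM conditions on the full word history, scoring a handful of complete hypotheses this way is straightforward, whereas exact lattice rescoring would require expanding or approximating the history at every lattice state, which is what the approximations cited above address.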
“…The impact of CS and other kinds of language switches on speech-to-text systems has recently received research interest, resulting in several robust acoustic modeling [2][3][4][5][6][7][8] and language modeling [9][10][11] approaches for CS speech. Language identification (LID) is a relevant task for the automatic speech recognition (ASR) of CS speech [12][13][14][15].…”
Section: Introductionmentioning
confidence: 99%