2016
DOI: 10.1007/978-3-319-43958-7_56
LSTM-Based Language Models for Spontaneous Speech Recognition

Cited by 16 publications (6 citation statements)
References 14 publications
“…This also effectively improves the model's performance on unseen data (Wan et al., 2013). Dropout means that a unit is temporarily dropped from the network, together with all of its incoming and outgoing connections (Medennikov and Bulusheva, 2016). Each element of an output layer is retained with probability p, or set to 0 with probability (1 − p).…”
Section: X (11)
Citation type: mentioning
Confidence: 99%
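The keep/zero rule quoted above is easy to state in code. Below is a minimal NumPy sketch of (inverted) dropout, not the cited authors' implementation; the 1/p rescaling of surviving units is a common convention assumed here, since the quote only specifies keeping each element with probability p and zeroing it otherwise.

```python
import numpy as np

def dropout(x, p, training=True, seed=0):
    """Inverted dropout: keep each element with probability p, else zero it.

    Surviving activations are scaled by 1/p so the expected layer output
    matches between training and inference (an assumed convention; the
    quoted statement only specifies the keep/zero rule).
    """
    if not training:
        return x  # at test time all units are kept
    rng = np.random.default_rng(seed)
    mask = rng.random(x.shape) < p  # True with probability p
    return x * mask / p

# Usage: a layer output with keep-probability 0.8
activations = np.ones((2, 5))
print(dropout(activations, p=0.8))
```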
“…$x^{*} = \frac{x - x_{\min}}{x_{\max} - x_{\min}}$ (11), where x* is the normalized value, x_max is the maximum value in the sample data, and x_min is the minimum value in the sample data. 800 sets of monitoring data from March 1, 2015 to May 9, 2017 are used as training samples, and 184 sets of monitoring data from May 10, 2017 to November 10, 2017 are used as test samples.…”
Section: Data Preprocessing
Citation type: mentioning
Confidence: 99%
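Equation (11) is standard min-max scaling. A minimal sketch follows; the 800/184 split mirrors the sample counts in the quote, but the data here are synthetic placeholders, and reusing the training minimum/maximum for the test set is an assumption (the quote does not say which statistics were applied to the test samples).

```python
import numpy as np

def min_max_normalize(x, x_min=None, x_max=None):
    """Eq. (11): x* = (x - x_min) / (x_max - x_min).

    If x_min/x_max are not given they are taken from x itself; for a test
    set they would typically come from the training data so that both
    sets share the same scale.
    """
    x = np.asarray(x, dtype=float)
    x_min = x.min() if x_min is None else x_min
    x_max = x.max() if x_max is None else x_max
    return (x - x_min) / (x_max - x_min)

# Hypothetical monitoring series split as in the quote: 800 train, 184 test.
data = np.random.default_rng(0).normal(size=984)
train, test = data[:800], data[800:]
train_n = min_max_normalize(train)
test_n = min_max_normalize(test, train.min(), train.max())  # reuse train stats
```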
“…The long short-term memory (LSTM) network is an effective nonlinear recurrent neural network that is well suited to language modeling [9], machine translation [10], speech recognition [11], and other fields. However, although an LSTM model can fully capture the long-term history in input time-series data, it cannot uncover the correlations and inherent patterns between discontinuous data.…”
Section: Introduction
Citation type: mentioning
Confidence: 99%
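To make the language-modeling use of LSTMs concrete, here is a minimal PyTorch sketch of a next-token model (embed, run an LSTM over the history, project to vocabulary logits). The vocabulary size and layer dimensions are arbitrary placeholders, not values from the cited paper or from reference [9].

```python
import torch
import torch.nn as nn

class LSTMLanguageModel(nn.Module):
    """Next-token prediction: embed -> LSTM -> project to vocabulary logits."""

    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, vocab_size)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer tensor
        h, _ = self.lstm(self.embed(token_ids))  # (batch, seq_len, hidden)
        return self.proj(h)                      # logits over the vocabulary

# Usage: score a dummy batch; training would minimize cross-entropy
# between logits[:, :-1] and token_ids[:, 1:] (shifted targets).
model = LSTMLanguageModel()
tokens = torch.randint(0, 10000, (4, 20))
logits = model(tokens)
print(logits.shape)  # torch.Size([4, 20, 10000])
```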
“…In recent years, the feature-learning ability of artificial neural networks has become increasingly prominent (Hinton and Salakhutdinov, 2006; LeCun et al., 2015). Among these approaches, deep learning (DL) has achieved strong performance in image classification (Russakovsky et al., 2014), image segmentation (Ibtehaz and Rahman, 2019), and natural language processing (Medennikov and Bulusheva, 2016). DL predicts outcomes by learning the historical feature information of numerical simulation results.…”
Section: Introduction
Citation type: mentioning
Confidence: 99%