2017
DOI: 10.1109/cc.2017.8068761
Long short-term memory recurrent neural network-based acoustic model using connectionist temporal classification on a large-scale training corpus

Cited by 54 publications (19 citation statements)
References 15 publications
“…This organization makes RNNs memory-centric in operation, since they have to store weight synapses in available memory. A type of RNN referred to as Long Short-Term Memory (LSTM) has been gaining interest in recent times and has proved to be more effective than conventional RNNs [ 117 , 118 , 119 , 120 ]. Khan et al [ 121 ] developed a data analytic framework with a real-world application using Spark Machine Learning and LSTM techniques.…”
Section: Embedded Machine Learning Techniquesmentioning
confidence: 99%
“…Due to the large amounts of information produced during the past two decades, a need emerged for technical methods of classifying huge volumes of text, as statistical and manual methods were no longer adequate in this field [35][36][37][38][39][40][41][42][43][44][45][46][47][48]. Therefore, machine learning classification techniques appeared, which aim to classify unstructured texts and documents based on algorithms designed for this purpose.…”
Section: Machine Learning Techniquesmentioning
confidence: 99%
“…As for the output gate, it controls the output after the activation process, which takes place at the input gate. A further gate, the forget gate, which addresses the weakness of LSTM in determining the flow of information for specific units, has also been added [46]. Figure 6 shows the structure of the LSTM RNN.…”
Section: Long Short-term Memory (Lstm)mentioning
confidence: 99%
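The gating mechanism described in the statement above can be sketched as a single LSTM time step using the standard formulation (input, forget, and output gates plus a candidate update). This is a minimal illustrative sketch, not the implementation from the cited work; the names `lstm_step`, `W`, `U`, and `b` and the stacked-parameter layout are assumptions made here for compactness.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. W, U, b stack the parameters for the
    input (i), forget (f), and output (o) gates and the candidate (g)."""
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b      # pre-activations, shape (4n,)
    i = sigmoid(z[0:n])             # input gate: admits new information
    f = sigmoid(z[n:2 * n])         # forget gate: discards stale cell state
    o = sigmoid(z[2 * n:3 * n])     # output gate: controls the exposed output
    g = np.tanh(z[3 * n:4 * n])     # candidate cell update
    c = f * c_prev + i * g          # new cell state
    h = o * np.tanh(c)              # new hidden state
    return h, c

# Toy usage with random parameters
rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = rng.standard_normal((4 * n_hid, n_in))
U = rng.standard_normal((4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
h, c = lstm_step(rng.standard_normal(n_in),
                 np.zeros(n_hid), np.zeros(n_hid), W, U, b)
```

Because the forget gate multiplies the previous cell state elementwise, the cell can retain or erase information per unit, which is the "flow for specific units" weakness the forget gate was introduced to address.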
“…Recent advances in deep learning techniques have enabled continuous speech recognition with stable performance [8][9][10]. However, the high computational intensity of deep learning algorithms requires high-performance hardware.…”
Section: Conventional Speech Recognition Schemesmentioning
confidence: 99%