2022
DOI: 10.5815/ijmecs.2022.03.03
Enhanced Deep Hierarchal GRU & BILSTM using Data Augmentation and Spatial Features for Tamil Emotional Speech Recognition

Abstract: The Recurrent Neural Network (RNN) is well suited to emotional speech recognition because it naturally models continuously time-shifting signals. Although RNNs give good results, and GRU, LSTM, and BILSTM variants mitigate the vanishing-gradient problem, overfitting still reduces their efficiency. Hence, in this paper five deep learning architectures are designed to overcome these major issues using data augmentation and spatial features. The five architectures are: Enhanced Deep Hierarchal LSTM & GRU (EDHLG), E…
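As a minimal sketch of the hierarchical GRU/BiLSTM design the abstract describes, the snippet below stacks a GRU layer under a bidirectional LSTM in Keras. The input shape, layer sizes, dropout rate, and the 7 emotion classes are illustrative assumptions; the excerpt does not give the paper's exact configuration.

```python
# Minimal sketch of a hierarchical GRU + BiLSTM emotion classifier,
# assuming Keras, MFCC-style inputs of shape (time_steps, n_mfcc), and
# a hypothetical set of 7 emotion classes.
from tensorflow.keras import layers, models

n_mfcc, n_classes = 40, 7  # assumed feature and label dimensions

model = models.Sequential([
    layers.Input(shape=(None, n_mfcc)),            # variable-length utterances
    layers.GRU(128, return_sequences=True),        # lower recurrent layer
    layers.Dropout(0.3),                           # regularization against overfitting
    layers.Bidirectional(layers.LSTM(64)),         # upper bidirectional layer
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

The dropout layer stands in for the overfitting countermeasures the abstract mentions; data augmentation would be applied to the audio features before training.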

Cited by 5 publications (3 citation statements); references 7 publications.

Citation statements:
“…Output Gate (o_t): Determines the next hidden state h_t. The output is based on the cell state, but in a filtered form, Eq. (7) and Eq. (8)…”
Section: Long Short-Term Memory (LSTM) Network
Confidence: 99%
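For reference, Eq. (7) and Eq. (8) in the quoted passage conventionally denote the standard LSTM output-gate update; the reconstruction below assumes that conventional formulation, since the excerpt itself does not show the equations.

```latex
\begin{align}
  o_t &= \sigma\!\left(W_o\,[h_{t-1},\, x_t] + b_o\right) && \text{(7) output gate}\\
  h_t &= o_t \odot \tanh(C_t)                             && \text{(8) filtered cell state}
\end{align}
```

Here \(\sigma\) is the logistic sigmoid, \(\odot\) is elementwise multiplication, and \(C_t\) is the cell state, so \(h_t\) is a filtered view of the cell state, as the citation says.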
“…The EV battery charge prediction landscape has been characterized by traditional models, such as linear regression, support vector machines, and more straightforward time series forecasting techniques like ARIMA [7]. While these models have shown promise in scenarios with consistent charging patterns, they often falter when confronted with the non-linear and dynamic nature of real-world EV charging data.…”
Section: Introduction
Confidence: 99%
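To make the kind of traditional baseline this citation refers to concrete, here is a minimal ARIMA forecasting sketch using statsmodels; the synthetic demand series, the (p, d, q) order, and the forecast horizon are illustrative assumptions, not taken from the cited work.

```python
# Minimal ARIMA baseline sketch for a univariate charging-demand series,
# fit on synthetic data with statsmodels; all numbers are assumptions.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
t = np.arange(200)
# Synthetic daily demand: linear trend + weekly cycle + noise.
demand = 50 + 0.1 * t + 5 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 1, t.size)

fit = ARIMA(demand, order=(2, 1, 2)).fit()  # assumed (p, d, q) order
forecast = fit.forecast(steps=7)            # one-week-ahead forecast
print(forecast)
```

A linear model like this captures the trend and a fixed seasonality but, as the citation notes, cannot adapt to non-linear, dynamic charging behavior.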
“…The fundamental problem of few-sample learning is that the amount of target-task data is not enough to train a robust learning model. Therefore, augmenting the data can greatly help to solve the few-sample problem [10]. Currently, widely used methods of data augmentation include traditional data augmentation methods (such as rotation, translation, scaling, etc.)…”
Section: Few-shot Learning
Confidence: 99%
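To make the traditional augmentations named in the quote (rotation, translation, scaling) concrete, here is a minimal sketch using torchvision transforms; the library choice and parameter ranges are illustrative assumptions.

```python
# Minimal sketch of traditional image data augmentation
# (rotation, translation, scaling) with torchvision; the
# parameter ranges below are illustrative assumptions.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomRotation(degrees=15),              # rotation
    transforms.RandomAffine(degrees=0,
                            translate=(0.1, 0.1),       # translation
                            scale=(0.9, 1.1)),          # scaling
    transforms.ToTensor(),
])

# Usage: augmented = augment(pil_image)  # pil_image: a PIL.Image input
```

Each pass through the pipeline yields a randomly perturbed copy of the input, which is how such transforms expand a small target-task dataset.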