2017
DOI: 10.1007/s00521-017-3076-7

A novel optimized GA–Elman neural network algorithm

Cited by 57 publications (20 citation statements)
References 21 publications
“…Thus, we chose the Elman neural network to build the model. The Elman neural network structure is shown in Figure 1. The Elman neural network consists of four layers, namely an input layer, a hidden layer, a context layer, and an output layer [11]. The input layer transmits external signals to the hidden layer.…”
Section: Elman Neural Network (ENN)
Mentioning, confidence: 99%
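
The excerpt above outlines the standard Elman topology: the context layer stores the previous hidden state and feeds it back into the hidden layer alongside the external input. A minimal forward-pass sketch in Python/NumPy is given below; the layer sizes, tanh activation, and linear output are illustrative assumptions, not details taken from the cited paper.

import numpy as np

class ElmanNetwork:
    """Minimal Elman network: input and context feed the hidden layer, hidden feeds the output."""
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(scale=0.1, size=(n_hidden, n_in))       # input -> hidden
        self.W_ctx = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # context -> hidden
        self.W_out = rng.normal(scale=0.1, size=(n_out, n_hidden))     # hidden -> output
        self.context = np.zeros(n_hidden)  # context layer holds the previous hidden state

    def step(self, x):
        # The hidden layer combines the external input with the context layer,
        # then the context layer copies the new hidden state for the next step.
        h = np.tanh(self.W_in @ x + self.W_ctx @ self.context)
        self.context = h
        return self.W_out @ h  # linear output layer (an assumption)

net = ElmanNetwork(n_in=3, n_hidden=8, n_out=1)
outputs = [net.step(x) for x in np.random.rand(5, 3)]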
“…The Elman neural network has been widely used in many fields. 34,35 However, the Elman network carries forward the advantages of the back-propagation (BP) algorithm and, at the same time, inevitably inherits some of the BP network's inherent disadvantages: it is prone to becoming trapped in local minima, which can cause training to fail; its learning rate is fixed, which limits the convergence rate; and the number of hidden neurons is difficult to determine, so manual trial and error wastes a great deal of time. These deficiencies limit the network's efficiency and recognition accuracy.…”
Section: Thought of GA-Elman Algorithm
Mentioning, confidence: 99%
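
The weaknesses listed in this excerpt (sensitivity to initial weights, local minima, a hand-tuned number of hidden neurons) are what the GA step in a GA-Elman scheme is meant to counter: candidate weight sets are evolved globally before any gradient-based training. The self-contained sketch below illustrates that idea; the genome encoding, truncation selection, one-point crossover, Gaussian mutation, and all hyperparameters are assumptions for illustration, not the exact operators of the cited algorithm.

import numpy as np

rng = np.random.default_rng(1)
N_IN, N_HID, N_OUT = 3, 8, 1
DIM = N_HID * N_IN + N_HID * N_HID + N_OUT * N_HID  # flat genome length

def forward(genome, xs):
    """Run an Elman forward pass with weights decoded from a flat genome."""
    a = N_HID * N_IN
    b = a + N_HID * N_HID
    W_in = genome[:a].reshape(N_HID, N_IN)
    W_ctx = genome[a:b].reshape(N_HID, N_HID)
    W_out = genome[b:].reshape(N_OUT, N_HID)
    context, outputs = np.zeros(N_HID), []
    for x in xs:
        h = np.tanh(W_in @ x + W_ctx @ context)  # hidden layer sees input + context
        context = h                              # context layer copies the hidden state
        outputs.append(W_out @ h)
    return np.array(outputs).ravel()

def fitness(genome, xs, ys):
    """Negative mean squared error: higher is better for the GA."""
    return -np.mean((forward(genome, xs) - ys) ** 2)

def evolve(xs, ys, pop=30, gens=50, p_mut=0.1):
    """Evolve initial weights so gradient training can start from a good basin."""
    population = rng.normal(scale=0.5, size=(pop, DIM))
    for _ in range(gens):
        scores = np.array([fitness(g, xs, ys) for g in population])
        parents = population[np.argsort(scores)[::-1][: pop // 2]]  # truncation selection
        children = []
        while len(parents) + len(children) < pop:
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, DIM)                    # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            mask = rng.random(DIM) < p_mut                # Gaussian mutation
            child[mask] += rng.normal(scale=0.1, size=mask.sum())
            children.append(child)
        population = np.vstack([parents, children])
    scores = np.array([fitness(g, xs, ys) for g in population])
    return population[np.argmax(scores)]  # fittest genome = candidate initial weights

# Toy usage: evolve weights to fit a short random sequence.
xs = rng.random((20, N_IN))
ys = np.sin(xs.sum(axis=1))
best_genome = evolve(xs, ys)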
“…Their work is tested on LSTMs using image captioning, speech recognition, and language processing applications, showing speedups of 5.1, 44.9, and 1.53, respectively. Recently, GA was introduced into the Elman architecture to accelerate training and prevent the local-minima problem [18]. GA-Elman outperforms traditional training algorithms in terms of convergence speed and accuracy.…”
Section: Parallelizing Iterative Training Algorithms for RNN
Mentioning, confidence: 99%