2019
DOI: 10.48084/etasr.2455
Systems Modeling Using Deep Elman Neural Network

Abstract: In this paper, the modeling of complex systems using a deep Elman neural network architecture is improved. The emphasis is on finding a deep Elman structure that accurately emulates such dynamic systems. To this end, sigmoid activation functions are chosen for the nodes of the hidden and output layers, and data files on the considered systems are provided for the modeling and validation steps. Simulation results demonstrate the ability and efficiency of a deep Elman neural network with two hidden layers in this task.
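The abstract describes a deep Elman network with two hidden layers and sigmoid activations in the hidden and output layers. A minimal forward-pass sketch of such an architecture is given below; the layer sizes, weight initialization, and class/variable names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class DeepElman:
    """Sketch of a deep Elman network with two hidden layers.

    Each hidden layer keeps a context state holding its previous
    activation, which is fed back at the next time step. Sizes and
    initialization below are illustrative, not from the paper.
    """

    def __init__(self, n_in, n_h1, n_h2, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.1, (n_h1, n_in))   # input -> hidden 1
        self.C1 = rng.normal(0, 0.1, (n_h1, n_h1))   # context 1 -> hidden 1
        self.W2 = rng.normal(0, 0.1, (n_h2, n_h1))   # hidden 1 -> hidden 2
        self.C2 = rng.normal(0, 0.1, (n_h2, n_h2))   # context 2 -> hidden 2
        self.W3 = rng.normal(0, 0.1, (n_out, n_h2))  # hidden 2 -> output
        self.h1 = np.zeros(n_h1)                     # context states
        self.h2 = np.zeros(n_h2)

    def step(self, x):
        # Sigmoid activations in hidden and output layers, as in the abstract.
        self.h1 = sigmoid(self.W1 @ x + self.C1 @ self.h1)
        self.h2 = sigmoid(self.W2 @ self.h1 + self.C2 @ self.h2)
        return sigmoid(self.W3 @ self.h2)
```

Calling `step` repeatedly over a sequence lets the context states accumulate past information, which is what makes the Elman structure suitable for dynamic-system modeling.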

Cited by 10 publications (8 citation statements)
References 34 publications
“…where α, β_1 and β_2 are the self-connected feedback gain factors of the three context layers. Equations (16)–(18) combine past information with information at the current time step, and α, β_1 and β_2 adjust the weight of the past information. The inheritance of the historical state is superimposed on the context layer of the current time step through a parameter, which realizes the memory of the historical state.…”
Section: Methods Of Full Feedback
confidence: 99%
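The statement above describes a context layer whose previous state is fed back through a self-connected gain factor. A one-line sketch of such an update is shown below; the function name and the exact combination rule are assumptions for illustration, since the precise form of Equations (16)–(18) is only summarized in the citing text.

```python
import numpy as np

def context_update(c_prev, h_prev, alpha):
    """One self-connected context-layer update (illustrative sketch).

    The new context state inherits the previous context scaled by the
    feedback gain alpha, superimposed on the previous hidden-layer
    activation, so alpha weights how much past information is retained.
    """
    return alpha * c_prev + h_prev
```

With alpha near 1 the context decays slowly and retains a long memory of past states; with alpha near 0 it reduces to a plain one-step delay of the hidden activation.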
“…As in the Jordan-NARX mode, x_k and y_k at each time step are related to the state and output at the previous time step, and, going back all the way, to the outputs at earlier time steps. Since t_d can be set to 1, the context layer can be simplified to one-step time-delay feedback, and Equations (16)–(18) are simplified as…”
Section: Methods Of Full Feedback
confidence: 99%
“…A. Plant Network. Neural networks are known as a powerful technique for modeling a system whose mathematical model is unknown or difficult to build [20,21]. In this section, a neural network is used to model the plant, which consists of the pile, the excavator, and the tracking controller.…”
Section: Network Training And Simulation
confidence: 99%
“…The Gated Recurrent Unit (GRU), a variant of the RNN, is selected as the basic ML model for the proposed E2E architecture. The RNN is selected for its efficient information handling with a smaller context [19]. The GRU is computationally simpler than other RNN variants.…”
Section: B. The E2E ML Model
confidence: 99%
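The last statement selects the GRU as the basic recurrent unit. For comparison with the Elman context-feedback mechanism above, here is the standard GRU cell update (this is the textbook formulation, not code from the cited work; the weight shapes are illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, params):
    """Single GRU time step (standard formulation, illustrative weights).

    z gates how much of the candidate state replaces the old state,
    r gates how much of the old state enters the candidate.
    """
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = sigmoid(Wz @ x + Uz @ h)            # update gate
    r = sigmoid(Wr @ x + Ur @ h)            # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate state
    return (1.0 - z) * h + z * h_tilde      # interpolated new state
```

Unlike the Elman context layer, which feeds back the past state through fixed gain factors, the GRU learns its gating, which is why it is often preferred when memory requirements vary across a sequence.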