2011
DOI: 10.33899/iqjoss.2011.27897
Conjugate Gradient Back-propagation with Modified Polak–Ribière Updates for Training Feed-forward Neural Networks

Cited by 3 publications (2 citation statements); references 0 publications.
“…2.5. Conjugate gradient backpropagation with Polak–Ribière updates (CGP) [37] The search direction in the Fletcher-Reeves algorithm is determined using equation (7…”
Section: Gradient Descent With Momentum Backpropagation (Traingdm) [34]
Mentioning confidence: 99%
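The CGP scheme the citing papers refer to replaces plain gradient descent with a conjugate search direction whose coefficient is the Polak–Ribière formula. A minimal sketch of this update is shown below on a toy quadratic objective rather than an actual neural network; the function name `polak_ribiere_cg`, the exact-line-search step, and the PR+ restart safeguard are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def polak_ribiere_cg(A, b, w, iters=10):
    """Minimize f(w) = 0.5 * w^T A w - b^T w with Polak-Ribiere CG (sketch)."""
    grad = lambda w: A @ w - b   # gradient of the quadratic objective
    g = grad(w)
    d = -g                       # initial direction: steepest descent
    for _ in range(iters):
        # Exact line search, valid for a quadratic: alpha = -g^T d / (d^T A d)
        alpha = -(g @ d) / (d @ A @ d)
        w = w + alpha * d
        g_new = grad(w)
        # Polak-Ribiere coefficient: beta = g_new^T (g_new - g) / (g^T g)
        beta = g_new @ (g_new - g) / (g @ g)
        d = -g_new + max(beta, 0.0) * d   # PR+ safeguard: restart if beta < 0
        g = g_new
        if np.linalg.norm(g) < 1e-10:     # converged: gradient vanished
            break
    return w

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
w_star = polak_ribiere_cg(A, b, np.zeros(2))
print(np.allclose(A @ w_star, b))  # True: solves A w = b in <= 2 steps
```

In a neural-network setting the exact line search would be replaced by an inexact one, and `grad` by backpropagated gradients of the loss; the direction update `d = -g_new + beta * d` is the part the CGP name refers to.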
“…The evolution of NNs in control systems spans decades, progressing from single-layer networks in the 1950s to multilayer networks with the advent of the back-propagation algorithm in 1986. Overcoming challenges associated with deep NNs took another two decades, culminating in the development of DL, addressing issues like computational load and vanishing gradients [8, 21–23]. Control engineering has witnessed a surge in research on model-free controllers (MFCs), particularly data-driven controllers, over the last two decades.…”
Section: Introduction
Mentioning confidence: 99%