2012
DOI: 10.1016/j.eswa.2011.08.087
Comparing the performance of neural networks developed by using Levenberg–Marquardt and Quasi-Newton with the gradient descent algorithm for modelling a multiple response grinding process

Cited by 136 publications (54 citation statements). References 43 publications.
“…In classical ANNs, the training of the weights is accomplished using Gradient Descent (GD), a gradient-based algorithm. Recently, however, the Levenberg-Marquardt (LM) algorithm has instead been used for training, as it has been shown to outperform GD in a variety of problems [7,8,9,10]. As LM is still a gradient-based technique, it can still converge to a local minimum depending on the initial weight values.…”
Section: Modelling
confidence: 99%
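The update rules behind that comparison can be made concrete. Below is a minimal NumPy sketch, using a toy one-neuron least-squares fit rather than the paper's grinding-process model, that contrasts a single gradient-descent step with a single Levenberg-Marquardt step; both start from the same initial weights, which is where the local-minimum caveat comes from.

```python
# Minimal sketch (assumed toy model, not the paper's grinding-process network):
# one gradient-descent step versus one Levenberg-Marquardt step for a least-squares fit.
import numpy as np

def residuals(w, x, y):
    # Hypothetical single-neuron model: y_hat = tanh(w[0]*x + w[1])
    return np.tanh(w[0] * x + w[1]) - y

def jacobian(w, x, y, eps=1e-6):
    # Numerical Jacobian of the residual vector with respect to the weights
    J = np.zeros((x.size, w.size))
    for j in range(w.size):
        dw = np.zeros_like(w)
        dw[j] = eps
        J[:, j] = (residuals(w + dw, x, y) - residuals(w - dw, x, y)) / (2 * eps)
    return J

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 50)
y = np.tanh(1.5 * x + 0.3) + 0.05 * rng.normal(size=50)
w = np.array([0.1, 0.1])           # initial weights; both methods depend on this choice

r = residuals(w, x, y)
J = jacobian(w, x, y)
grad = J.T @ r                     # gradient of the squared-error objective 0.5*||r||^2

w_gd = w - 0.1 * grad              # gradient-descent step (first-order only)
lam = 1e-2                         # LM damping factor
w_lm = w - np.linalg.solve(J.T @ J + lam * np.eye(2), grad)  # LM step (damped Gauss-Newton)
print(w_gd, w_lm)
```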
“…MLPs utilize the back-propagation (BP) learning technique in conjunction with an optimization method such as gradient descent or Levenberg-Marquardt for training [105]. At completion of the training process, the MLP is capable of giving an output solution for any new input based on the generalized mapping that has been developed [106].…”
Section: ANN-based Models
confidence: 99%
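As an illustration of that train-then-generalize workflow, the sketch below fits a small MLP on synthetic data and then queries it on an unseen input. It uses scikit-learn's MLPRegressor, which offers 'sgd' (gradient descent) and 'lbfgs' (a quasi-Newton method) but not Levenberg-Marquardt; the data and network size are arbitrary assumptions, not values from the cited work.

```python
# Illustrative only: fit a small MLP on synthetic data, then reuse the learned
# mapping on an input the network has never seen.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(200, 2))            # two hypothetical process inputs
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2          # one hypothetical response

# 'lbfgs' is a quasi-Newton solver; gradients are obtained by back-propagation.
mlp = MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs",
                   max_iter=2000, random_state=0)
mlp.fit(X, y)

X_new = np.array([[0.4, 0.7]])                  # a new, unseen input
print(mlp.predict(X_new))                       # output from the generalized mapping
```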
“…The former require some partial derivatives to be calculated to tune the parameters of FNN, while the latter do not need derivative information in order to update the parameters of FNN. Gradient descent (GD) (Mukherjee and Routroy, 2012) and genetic algorithms (GAs) (Martinez-Martinez et al, 2015) are the most widely used approaches among the existing derivative-based and derivative-free training methods, respectively. However, GD training algorithms are based on the first-order Taylor expansion of a nonlinear function.…”
Section: Accepted Manuscript
confidence: 99%
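To make the derivative-based versus derivative-free distinction concrete, the following sketch applies one step of each to an assumed toy quadratic objective (not an FNN): gradient descent uses the derivative implied by the first-order Taylor expansion E(w + d) ≈ E(w) + ∇E(w)·d, while a GA-like update only compares objective values.

```python
# Sketch under assumed toy conditions: one derivative-based step (gradient descent)
# versus one derivative-free step (GA-like mutation and selection on a single candidate).
import numpy as np

TARGET = np.array([1.0, -2.0])

def loss(w):
    return np.sum((w - TARGET) ** 2)            # toy quadratic objective

def grad(w):
    return 2 * (w - TARGET)                     # analytic derivative of the toy objective

rng = np.random.default_rng(2)
w = np.zeros(2)

# Derivative-based: w <- w - eta * grad(w), justified by the first-order Taylor expansion.
w_gd = w - 0.1 * grad(w)

# Derivative-free: random mutation, kept only if the objective value improves.
candidate = w + rng.normal(scale=0.5, size=2)
w_df = candidate if loss(candidate) < loss(w) else w

print(w_gd, loss(w_gd), w_df, loss(w_df))
```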