2015
DOI: 10.1016/j.cplett.2015.04.019
An implementation of the Levenberg–Marquardt algorithm for simultaneous-energy-gradient fitting using two-layer feed-forward neural networks

Cited by 47 publications (27 citation statements, 2015–2024) · References 25 publications
“…The Levenberg-Marquardt (LM) training algorithm can be defined as a data-driven computing method based on artificial intelligence (AI) concepts which, more specifically, is able to correlate inversely and numerically the nonlinear relationships between a set of individual variables (IVs) and outputs via their characteristic mathematical topology (Nguyen-Truong and Le, 2015; Ahmadi et al., 2016; Jaeel et al., 2016). The basic concept behind the LM method is to correlate the connections between IVs and model output without assuming a prior formula defining this correlation (Sharma et al., 2017).…”
Section: The LM Model Development (mentioning)
confidence: 99%
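The citing papers describe LM only at this conceptual level. As a reminder of what the algorithm actually does, the sketch below applies the damped Gauss-Newton update at its core, dw = -(J^T J + mu*I)^(-1) J^T r, to a deliberately simple least-squares fit. It is a generic illustration with invented names, not the implementation of Nguyen-Truong and Le (2015).

```python
import numpy as np

def lm_step(residuals, jacobian, weights, mu):
    """One Levenberg-Marquardt update: dw = -(J^T J + mu*I)^(-1) J^T r."""
    J, r = jacobian, residuals
    delta = np.linalg.solve(J.T @ J + mu * np.eye(weights.size), -J.T @ r)
    return weights + delta

# Toy usage (assumed example): fit y = w0 + w1*x with a few damped iterations.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
y = 1.5 + 2.0 * x + 0.01 * rng.standard_normal(x.size)
w, mu = np.zeros(2), 1e-2
for _ in range(10):
    r = (w[0] + w[1] * x) - y                   # residual vector for the current weights
    J = np.column_stack([np.ones_like(x), x])   # Jacobian d(residual)/d(weights)
    w = lm_step(r, J, w, mu)
print(w)   # approaches [1.5, 2.0]
```

In the setting of the original paper, the residual vector would stack the energy and gradient errors of the two-layer feed-forward network, and J would be its Jacobian with respect to all weights and biases.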
“…The cross-validation set is used to provide an independent check of network performance, to avoid overfitting the model, and to terminate the training process at the minimum MSE (Nguyen-Truong and Le, 2015). The third cluster, the testing dataset, was used to evaluate the models' ability to generalise and the validity of the optimum ANN model on the last 15% of unseen data, after the appropriate network weights and biases had been selected (Ahmadi et al., 2016; Shahin, 2016).…”
Section: The LM Model Development (mentioning)
confidence: 99%
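The data-splitting and early-stopping procedure described above is standard and straightforward to sketch. In the toy version below the model is a plain linear regressor updated by gradient steps, purely to keep the example short; the 70/15/15 split proportions, the patience value, and all function names are assumptions (the excerpt only states that the last 15% of unseen data was reserved for testing and that training stops at the minimum validation MSE). The same stopping loop would wrap LM training epochs in exactly the same way.

```python
import numpy as np

# Hypothetical linear single-output "network", kept minimal so the sketch stays short;
# the cited works use multi-layer ANN models instead.
def predict(w, X):
    return X @ w

def mse(w, X, y):
    return float(np.mean((predict(w, X) - y) ** 2))

def split_70_15_15(X, y, seed=0):
    """Shuffle, then split into 70% training, 15% cross-validation, 15% unseen test data."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_tr, n_va = int(0.70 * len(X)), int(0.15 * len(X))
    tr, va, te = idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]
    return (X[tr], y[tr]), (X[va], y[va]), (X[te], y[te])

def train_early_stopping(w, train, val, lr=0.1, max_epochs=500, patience=10):
    """Update on the training set; stop once validation MSE has not improved for `patience` epochs."""
    Xtr, ytr = train
    best_w, best_val, stale = w.copy(), np.inf, 0
    for _ in range(max_epochs):
        grad = 2.0 * Xtr.T @ (predict(w, Xtr) - ytr) / len(ytr)   # gradient step stands in for an LM epoch
        w = w - lr * grad
        val_mse = mse(w, *val)
        if val_mse < best_val:
            best_w, best_val, stale = w.copy(), val_mse, 0
        else:
            stale += 1
            if stale >= patience:   # terminate training at the minimum validation MSE
                break
    return best_w

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.05 * rng.standard_normal(200)
train, val, test = split_70_15_15(X, y)
w_best = train_early_stopping(np.zeros(3), train, val)
print("test MSE:", mse(w_best, *test))   # generalisation check on the 15% unseen data
```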
“…The accuracy indicators of the trained LM algorithm were first assessed during the learning (training) process. This training algorithm was used because it is among the most efficient and reliable of the feed-forward artificial intelligence (AI) methods (Jeong and Kim, 2005; Nguyen-Truong and Le, 2015).…”
Section: The LM Algorithm Measuring Performance (mentioning)
confidence: 99%
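The excerpt does not say which accuracy indicators were computed, so the snippet below is only an assumed example of such an assessment, using three indicators commonly reported for regression models (MSE, RMSE, and R^2).

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Assumed accuracy indicators for a trained regression model: MSE, RMSE and R^2."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_pred - y_true
    mse = float(np.mean(err ** 2))
    r2 = 1.0 - float(np.sum(err ** 2)) / float(np.sum((y_true - y_true.mean()) ** 2))
    return {"MSE": mse, "RMSE": mse ** 0.5, "R2": r2}

# Example: indicators for a small set of predictions against measured values.
print(regression_metrics([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]))
```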
“…Therefore, PCA and the RBF neural network model were used in this paper to achieve accurate recognition of drinking-driving behaviors. To simplify the RBF neural network and avoid long training and local minima, the LM algorithm [12][13][14] was used to train the neural network, rather than the gradient descent method.…”
Section: Drinking-Driving Recognition Model Based on PCA and RBF Neural Network (mentioning)
confidence: 99%
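As a rough illustration of training an RBF model with LM rather than gradient descent, the toy sketch below fits only the output weights of a Gaussian RBF network with damped LM steps. The Gaussian basis, fixed centres and width, the data, and all names are assumptions made for the example and do not reflect the cited papers' actual networks or training details.

```python
import numpy as np

def rbf_features(X, centres, width):
    """Gaussian RBF design matrix: Phi[i, j] = exp(-||x_i - c_j||^2 / (2 width^2))."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def lm_train_rbf(X, y, centres, width, mu=1e-2, iters=20):
    """Fit the output weights of an RBF model with damped LM steps instead of gradient descent."""
    Phi = rbf_features(X, centres, width)   # weights enter linearly, so Phi is also the Jacobian
    w = np.zeros(Phi.shape[1])
    for _ in range(iters):
        r = Phi @ w - y                                                      # residuals
        w += np.linalg.solve(Phi.T @ Phi + mu * np.eye(w.size), -Phi.T @ r)  # damped LM update
    return w

rng = np.random.default_rng(2)
X = rng.uniform(-3.0, 3.0, size=(300, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(300)
centres = np.linspace(-3.0, 3.0, 10)[:, None]
w = lm_train_rbf(X, y, centres, width=0.8)
print("train MSE:", float(np.mean((rbf_features(X, centres, 0.8) @ w - y) ** 2)))
```

Because the output weights enter linearly, each LM step here is a damped least-squares solve and the iterates approach the exact least-squares solution; with nonlinear parameters (centres, widths) the Jacobian would have to be recomputed at every iteration.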