2010
DOI: 10.4314/ijest.v2i2.59147

Effect of training algorithms on neural networks aided pavement diagnosis

Abstract: Routine pavement maintenance necessitates structural diagnosis and condition evaluation of in-service pavements using non-destructive test equipment such as the Falling Weight Deflectometer (FWD). FWD testing involves measuring the time-domain surface deflections produced by an impulse load applied to the pavement. Through inverse analysis of FWD deflection data, the stiffness parameters of the individual pavement layers are generally determined using iterative optimization routines. In rece…

Cited by 27 publications (11 citation statements, 2011–2023)
References 6 publications
“…The scaled conjugate gradient backpropagation (SCG-BP) algorithm is a fully automated method, designed to avoid the time-consuming line search often used in CGB and quasi-Newton BP algorithms [56]. We adopted SCG-BP to train the designed MLP-ANN in this work to take advantage of its widely acclaimed speed of convergence [30, 57]. The number of neurons in the hidden layer of our MLP was determined experimentally, because there is currently no precise rule of thumb for selecting the number of hidden-layer neurons [58].…”
Section: Methods
confidence: 99%
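The quoted methods passage contrasts SCG-BP with conjugate-gradient and quasi-Newton training. As a minimal sketch of the general idea, the toy example below trains a one-hidden-layer MLP by treating its flattened weights as the variables of a conjugate-gradient minimization of the squared error. Note the assumptions: SciPy's "CG" performs a line search, whereas Møller's SCG specifically avoids one, and the data, target, and network size here are illustrative, not values from the paper.

```python
# Minimal sketch: training a one-hidden-layer MLP by conjugate-gradient
# minimization of the squared error. This uses SciPy's standard CG
# (with a line search), standing in for SCG, which avoids the line search.
# All data and layer sizes below are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))      # e.g. normalized deflection-basin inputs
y = np.sin(X.sum(axis=1))          # synthetic regression target

n_in, n_hid = X.shape[1], 8        # hidden size would be chosen experimentally

def unpack(w):
    """Split the flat weight vector into layer matrices and biases."""
    i = 0
    W1 = w[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = w[i:i + n_hid]; i += n_hid
    W2 = w[i:i + n_hid]; i += n_hid
    b2 = w[i]
    return W1, b1, W2, b2

def loss(w):
    """Half mean squared error of the network over the training set."""
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)       # hidden-layer activations
    out = h @ W2 + b2              # single linear output unit
    return 0.5 * np.mean((out - y) ** 2)

n_w = n_in * n_hid + n_hid + n_hid + 1
w0 = rng.normal(scale=0.1, size=n_w)
res = minimize(loss, w0, method="CG", options={"maxiter": 500})
print(f"final training MSE: {2 * res.fun:.4f}")
```

In practice the hidden-layer width (n_hid above) would be swept over a small grid and scored on held-out data, mirroring the experimental selection of hidden neurons the citing authors describe.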
“…Training an ANN involves the minimization of an error function that depends on the network’s synaptic weights (the w’s in Figure 2), and these weights are iteratively updated by a learning algorithm to approximate the target variable. The updating is usually accomplished by back-propagating the error, which is essentially a non-linear least-squares problem [19]. Back-propagation is a supervised learning algorithm based on a suitable error function, whose values are determined by the target (i.e., marbling EPD) and the mapped (predicted) outputs of the network (i.e., fitted values of marbling EPD).…”
Section: Methods
confidence: 99%
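To make the least-squares framing concrete, a common form of the back-propagated error and the gradient-descent weight update is sketched below; the notation is assumed here, not taken from the cited figure:

```latex
E(\mathbf{w}) = \frac{1}{2}\sum_{p=1}^{P}\bigl(t_p - y_p(\mathbf{w})\bigr)^2,
\qquad
w_{ij} \leftarrow w_{ij} - \eta\,\frac{\partial E}{\partial w_{ij}}
```

where t_p and y_p(w) are the target and network output for training pattern p, and η is the learning rate; back-propagation evaluates each ∂E/∂w_ij layer by layer via the chain rule.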
“…This technique is fast and requires less memory during analysis. Also, Gopalakrishnan (2010) showed that this algorithm performs better in developing prediction models than other error-minimization techniques.…”
Section: Selection of NN Algorithm
confidence: 97%
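Since this citing section weighs training algorithms on speed and memory, a quick benchmark sketch follows. scikit-learn's MLPRegressor does not include SCG, so its available solvers ('lbfgs', 'sgd', 'adam') stand in purely for illustration, and the data is synthetic.

```python
# Minimal sketch: timing different training algorithms on the same MLP,
# in the spirit of the cited comparison. The solvers and data here are
# illustrative stand-ins, not the algorithms compared in the paper.
import time
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = np.sin(X.sum(axis=1))

for solver in ("lbfgs", "sgd", "adam"):
    net = MLPRegressor(hidden_layer_sizes=(8,), solver=solver,
                       max_iter=2000, random_state=0)
    t0 = time.perf_counter()
    net.fit(X, y)                      # train with the chosen algorithm
    dt = time.perf_counter() - t0
    print(f"{solver:>6}: R^2 = {net.score(X, y):.3f}, time = {dt:.2f} s")
```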