2011
DOI: 10.1109/tnn.2011.2128344
Practical Training Framework for Fitting a Function and Its Derivatives

Abstract: This paper describes a practical framework for using multilayer feedforward neural networks to simultaneously fit both a function and its first derivatives. This framework involves two steps. The first step is to train the network to optimize a performance index, which includes both the error in fitting the function and the error in fitting the derivatives. The second step is to prune the network by removing neurons that cause overfitting and then to retrain it. This paper describes two novel types of overfitt…
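The two-term performance index described in the abstract — function error plus derivative error — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the one-hidden-layer tanh architecture, the target function sin(x), and the weighting factor `rho` are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative target function and its analytic derivative (assumption).
f = np.sin
df = np.cos

# One-hidden-layer tanh network: y = W2 @ tanh(W1 @ x + b1) + b2.
n_hidden = 10
W1 = rng.normal(size=(n_hidden, 1))
b1 = np.zeros((n_hidden, 1))
W2 = rng.normal(size=(1, n_hidden))
b2 = np.zeros((1, 1))

def forward(x):
    """Return the network output y(x) and its input derivative dy/dx."""
    a = W1 @ x + b1                      # (n_hidden, n_samples)
    h = np.tanh(a)
    y = W2 @ h + b2                      # (1, n_samples)
    # Chain rule: dy/dx = W2 @ diag(1 - tanh^2(a)) @ W1, per sample.
    dy = (W2 * (1.0 - h.T ** 2)) @ W1    # (n_samples, 1)
    return y, dy.T

x = rng.uniform(-np.pi, np.pi, size=(1, 50))
y, dy = forward(x)

# Performance index combining function error and derivative error;
# rho is a hypothetical weighting hyperparameter for this sketch.
rho = 0.5
J = np.mean((y - f(x)) ** 2) + rho * np.mean((dy - df(x)) ** 2)
```

Training would then minimize `J` with respect to the weights; the point of the sketch is only the shape of the two-term index.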

Cited by 21 publications (9 citation statements)
References 25 publications
“…The calculations of ∂L/∂W and Eq. (12) are performed by backpropagation [49,50]. The backpropagation algorithm used in the present paper is briefly explained in Appendix D. The training data set was divided into 90% for training and 10% for validation.…”
Section: Training Of Neural Network Toward Kinetic Energy Functional ...
confidence: 99%
“…In order to calculate ∂L/∂W, we need to obtain several derivatives: ∂F_NN/∂W, ∂(∂F_NN/∂(s²))/∂W, and ∂(∂F_NN/∂q)/∂W. In this work, the first derivative is obtained by conventional backpropagation, and the second and third derivatives are obtained by backpropagation for the derivatives [49,50]. Here we briefly review the backpropagation algorithm in order to explain how the partial derivatives of the NN outputs with respect to the NN weights W are computed.…”
Section: Appendix D: Functional Derivative Training
confidence: 99%
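The statement above distinguishes ordinary backpropagation (∂F_NN/∂W) from backpropagation for the derivatives (∂(∂F_NN/∂x)/∂W). For a one-hidden-layer tanh network with scalar input, the latter can be written in closed form; the sketch below is an assumed minimal architecture, not the cited papers' code, and verifies the weight gradients of the input derivative against finite differences.

```python
import numpy as np

rng = np.random.default_rng(1)
n_hidden = 8

W1 = rng.normal(size=n_hidden)   # input-to-hidden weights (scalar input)
b1 = rng.normal(size=n_hidden)   # hidden biases
W2 = rng.normal(size=n_hidden)   # hidden-to-output weights

def net_derivative(x, W1=W1, W2=W2):
    """g(x) = dy/dx for y = W2 . tanh(W1 * x + b1)."""
    h = np.tanh(W1 * x + b1)
    return np.sum(W2 * (1.0 - h ** 2) * W1)

def grad_of_derivative(x):
    """Backpropagation for the derivative: dg/dW1 and dg/dW2."""
    h = np.tanh(W1 * x + b1)
    s = 1.0 - h ** 2                           # tanh'(a)
    dg_dW2 = s * W1                            # g is linear in W2
    # Product rule on W2_i * W1_i * (1 - h_i^2), with dh_i/dW1_i = s_i * x:
    dg_dW1 = W2 * s - 2.0 * W2 * W1 * h * s * x
    return dg_dW1, dg_dW2
```

These are exactly the quantities a derivative-fitting loss needs in addition to the conventional ∂F_NN/∂W term.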
“…Among the various learning methods for ANNs, the backpropagation rule [9], or delta rule, is commonly used. The ANN reads the input and output values from the training data set and adjusts the values of the weighted connections to reduce the difference between the predicted and target values. The prediction error is minimized over many training cycles until the network reaches the specified level of accuracy.…”
Section: Fig 1: Pictographic Representation Of Four Layered ANN Netw…
confidence: 99%
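The delta-rule update described above — adjusting weights in proportion to the error between prediction and target — can be sketched for a single linear unit. The learning rate, data, and true weights here are illustrative assumptions, not taken from the cited work.

```python
import numpy as np

rng = np.random.default_rng(2)

# Noiseless linear training data from assumed true weights [2, -1].
true_w = np.array([2.0, -1.0])
X = rng.normal(size=(200, 2))
t = X @ true_w

w = np.zeros(2)
eta = 0.05                            # learning rate (assumption)
for epoch in range(50):               # many training cycles
    for x, target in zip(X, t):
        error = target - w @ x        # difference between target and prediction
        w += eta * error * x          # delta-rule weight update
```

Over repeated cycles the error shrinks until the weights match the generating function, which is the convergence behavior the passage describes.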
“…More specifically, [11] stated that a three-layered neural network with sigmoidal units in the hidden layer can approximate continuous (or otherwise defined) functions on compact sets with any precision. However, it is worth mentioning that the application of this fundamental approximation result has a number of limitations: (i) the network architecture is selected somewhat arbitrarily, and (ii) the performance of neural networks depends on the data used in training and testing [12][13][14][15].…”
Section: Neural Network Algorithms
confidence: 99%