IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)
DOI: 10.1109/ijcnn.1999.831531

Approximation of a function and its derivatives in feedforward neural networks

Cited by 7 publications (5 citation statements)
References 20 publications

“…For some low-dimensional cases, it allows deviations from the targets to approach the rounding error of the single precision used during training, thus addressing the gap between describing a function by an array of values and describing it by a neural network. The concept of using derivatives for approximation [22] is quite common and has been investigated for neural networks in numerous studies [23]-[29]; however, the training implementations in those papers included only low-order derivatives and used relatively small architectures, since their test conditions did not yield precision gains of a few orders of magnitude. Even though the architectural requirements for neural networks to approximate derivatives are usually modest [4], extra layers are sometimes necessary [30].…”
Section: Introduction
confidence: 99%
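
The derivative-fitting idea referenced above is easy to sketch. The following is a minimal illustration, not code from the paper or any of the citing works: a small PyTorch network is trained so that both its output and its input derivative (obtained via autograd) match targets, with the function sin(3x) and its derivative assumed purely for demonstration.

```python
import torch
import torch.nn as nn

# Hypothetical 1-D data: values and first derivatives of f(x) = sin(3x).
x = torch.linspace(-1.0, 1.0, 64).unsqueeze(1)
y = torch.sin(3.0 * x)
dy = 3.0 * torch.cos(3.0 * x)

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    xr = x.clone().requires_grad_(True)
    out = net(xr)
    # Derivative of the network w.r.t. its input; create_graph=True keeps
    # this result differentiable so the derivative error can be trained.
    (dout,) = torch.autograd.grad(out.sum(), xr, create_graph=True)
    loss = ((out - y) ** 2).mean() + ((dout - dy) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```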
“…The second, less common approach is to enforce the condition D(N) = D(O), with a non-zero right-hand side, as an addition to the objective N = O. This idea has been implemented in [12, 7, 21, 28, 63] with moderate success. The author's paper [2] showed how this method can significantly increase the accuracy of the initial objective.…”
Section: Training Derivatives
confidence: 99%
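
A plausible form of the combined objective this snippet describes, in our own notation rather than the cited papers' (which may weight or normalize the terms differently): N(x; θ) is the network, O(x) the target function, D a differential operator, and λ a weighting hyperparameter.

```latex
\mathcal{L}(\theta) = \big\| N(x;\theta) - O(x) \big\|^2
                    + \lambda \, \big\| D\,N(x;\theta) - D\,O(x) \big\|^2
```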
“…Now, using (9), (11), and (13), we can compute the elements of the gradient involving the weights using (8). For the elements of the gradient involving the bias terms, we can use…”
Section: A Gradient Calculation
confidence: 99%
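
Equations (8), (9), (11), and (13) of the citing paper are not reproduced in this snippet, so the following generic NumPy sketch only illustrates the point being made: for a feedforward network with squared error, the gradient elements for the biases reuse the same backpropagated error terms ("deltas") as the weight gradients, omitting only the multiplication by the layer input.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 1))            # inputs (one feature, batch of 4)
t = np.sin(x)                          # targets
W1, b1 = rng.normal(size=(1, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

h = np.tanh(x @ W1 + b1)               # hidden activations
y = h @ W2 + b2                        # network output
delta2 = y - t                         # output-layer error term
delta1 = (delta2 @ W2.T) * (1 - h**2)  # backpropagated through tanh

gW2 = h.T @ delta2                     # weight gradient: activation x delta
gb2 = delta2.sum(axis=0)               # bias gradient: the delta alone
gW1 = x.T @ delta1
gb1 = delta1.sum(axis=0)
```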
“…In the second stage, the parameters are adjusted to improve the derivative approximation. In [11], the derivative fitting was performed by adding an extra output unit for each partial derivative to the regular structure of a feedforward neural network that was used to approximate the function. The standard backpropagation procedure was then used to train the proposed network structure.…”
Section: Introduction
confidence: 99%
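
A minimal sketch of the structure attributed to [11], with every specific (input dimension, target function, optimizer) assumed for illustration: the network gains one extra output unit per partial derivative, and all outputs are fit by ordinary backpropagation against [f(x), ∇f(x)] targets, so no differentiation of the network itself is needed during training.

```python
import torch
import torch.nn as nn

dim = 2                                    # input dimension (assumed)
net = nn.Sequential(nn.Linear(dim, 32), nn.Tanh(), nn.Linear(32, 1 + dim))
opt = torch.optim.SGD(net.parameters(), lr=1e-2)

x = torch.rand(128, dim)
f = (x ** 2).sum(dim=1, keepdim=True)      # example target f(x) = |x|^2
grad_f = 2.0 * x                           # its partial derivatives
targets = torch.cat([f, grad_f], dim=1)    # [f, df/dx1, df/dx2]

for step in range(1000):
    # Plain MSE over all outputs: the derivative targets are handled by
    # the extra output units, not by differentiating the network.
    loss = ((net(x) - targets) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the derivative outputs are separate units rather than true derivatives of the value output, consistency between them is only encouraged through the shared hidden layers, not enforced.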