2021
DOI: 10.1109/tnnls.2020.2979706

Enhancing Function Approximation Abilities of Neural Networks by Training Derivatives

Abstract: A method to increase the precision of feedforward networks is proposed. It requires prior knowledge of the target function's derivatives of several orders and uses this information in gradient-based training. The forward pass calculates not only the values of the output layer of the network but also their derivatives, and the deviations of those derivatives from the target ones are included in an extended cost function. The backward pass then calculates the gradient of the extended cost with respect to the weights, which can then …
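
As a rough sketch of the idea (not the paper's implementation): the toy script below trains a small network to match both the values and the first derivative of a hypothetical 1-D target f(x) = sin(x). The target, the network size, and the use of PyTorch automatic differentiation in place of the paper's explicit extended forward and backward passes are all assumptions made for illustration.

import torch

# Hypothetical 1-D target with a known first derivative (illustration only).
f  = lambda x: torch.sin(x)
df = lambda x: torch.cos(x)

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x = torch.linspace(-3, 3, 200).reshape(-1, 1).requires_grad_(True)

for step in range(2000):
    y = net(x)
    # Derivative of the network output with respect to its input.
    dy = torch.autograd.grad(y.sum(), x, create_graph=True)[0]
    # Extended cost: value mismatch plus first-derivative mismatch.
    loss = ((y - f(x)) ** 2).mean() + ((dy - df(x)) ** 2).mean()
    opt.zero_grad()
    loss.backward()   # backward pass through the extended cost
    opt.step()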

Cited by 16 publications (25 citation statements). References 39 publications (49 reference statements).

“…The input of such a network is considered as a vector of independent variables and the output as the value of the solution. All necessary derivatives of the output with respect to the input, and of the cost function with respect to the weights, can be calculated by the extended backpropagation procedure [2]. Including the boundary conditions in the cost function itself usually does not produce very accurate results [24], so a process of function substitution is required [18].…”
Section: Introduction (mentioning)
confidence: 99%
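
A minimal sketch of the function substitution mentioned above, for a toy problem u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0 (the problem, the trial form, and all names are illustrative assumptions, not taken from the cited works): the trial solution x(1 - x)N(x) satisfies the boundary conditions by construction, so the cost contains only the interior residual and no boundary term.

import torch

# Toy problem: u''(x) = f(x) on (0, 1), u(0) = u(1) = 0 (illustrative).
# For this f the exact solution is u(x) = sin(pi x).
f = lambda x: -(torch.pi ** 2) * torch.sin(torch.pi * x)

N = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                        torch.nn.Linear(32, 1))
opt = torch.optim.Adam(N.parameters(), lr=1e-3)
x = torch.linspace(0, 1, 100).reshape(-1, 1).requires_grad_(True)

for step in range(3000):
    u = x * (1 - x) * N(x)                        # satisfies u(0) = u(1) = 0
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    loss = ((d2u - f(x)) ** 2).mean()             # PDE residual only, no boundary term
    opt.zero_grad()
    loss.backward()
    opt.step()
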
“…They are propagated backward in order to obtain the gradient of E with respect to the weights of each layer. The whole procedure is described in [2]. One can mention a few features of the neural network approach:…”
Section: Introduction (mentioning)
confidence: 99%
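
As a reminder of the underlying mechanics, the toy sketch below (a two-layer tanh network with a plain mean-squared cost E; shapes and names are illustrative only) shows how the output errors are pushed backward through each layer to give dE/dW. The extended procedure of [2] additionally propagates derivative terms, which is not shown here.

import numpy as np

# Toy two-layer tanh network and mean-squared cost E (illustrative).
rng = np.random.default_rng(0)
x = rng.normal(size=(10, 1))             # batch of inputs
t = np.sin(x)                            # targets

W1, b1 = rng.normal(size=(1, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

# Forward pass
h = np.tanh(x @ W1 + b1)
y = h @ W2 + b2
E = 0.5 * np.mean((y - t) ** 2)

# Backward pass: propagate the output error through each layer
dy  = (y - t) / x.shape[0]               # dE/dy
dW2 = h.T @ dy                           # dE/dW2
db2 = dy.sum(axis=0)                     # dE/db2
dh  = dy @ W2.T                          # error at the hidden layer
dz  = dh * (1 - h ** 2)                  # through the derivative of tanh
dW1 = x.T @ dz                           # dE/dW1
db1 = dz.sum(axis=0)                     # dE/db1
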
“…Also, the differences between the mean and the variance calculated by Equations (27) and (28) and those obtained from the MC simulations are shown in Figure 2a,b.…”
Section: Ornstein-Uhlenbeck Process (mentioning)
confidence: 98%
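
For context only (these are the textbook formulas, not necessarily Equations (27) and (28) of the citing work): the Ornstein-Uhlenbeck process dX = theta*(mu - X) dt + sigma dW started at x0 has mean mu + (x0 - mu)*exp(-theta*t) and variance sigma^2*(1 - exp(-2*theta*t))/(2*theta). The sketch below compares these closed-form moments with a Monte Carlo (Euler-Maruyama) estimate using illustrative parameter values.

import numpy as np

# Illustrative parameters for dX = theta*(mu - X) dt + sigma dW, X(0) = x0.
theta, mu, sigma, x0, T = 1.0, 0.5, 0.3, 2.0, 1.0
n_paths, n_steps = 100_000, 1_000
dt = T / n_steps

rng = np.random.default_rng(0)
x = np.full(n_paths, x0)
for _ in range(n_steps):
    # Euler-Maruyama step for all paths at once.
    x = x + theta * (mu - x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

mean_exact = mu + (x0 - mu) * np.exp(-theta * T)
var_exact = sigma ** 2 * (1 - np.exp(-2 * theta * T)) / (2 * theta)
print("mean difference:", x.mean() - mean_exact)
print("variance difference:", x.var() - var_exact)
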
“…where û and v̂ are obtained from the neural network. The solutions u and v are chosen in such a way that they satisfy all the boundary conditions, as in [24,25]. As a consequence, no component corresponding to the boundary loss is needed in the loss function.…”
Section: Pressurized Thick-Cylinder (mentioning)
confidence: 99%
“…where φ̂ is given by the neural network. The solution φ is chosen in such a way that it satisfies all the boundary conditions, as in [24,25]. This ensures that the boundary conditions are satisfied during the training of the network.…”
Section: One-Dimensional Phase Field Model (mentioning)
confidence: 99%