2007
DOI: 10.1007/s00521-007-0135-5
A new constrained learning algorithm for function approximation by encoding a priori information into feedforward neural networks

Abstract: In this paper, a new learning algorithm which encodes a priori information into feedforward neural networks is proposed for the function approximation problem. The new algorithm considers two kinds of constraints derived from a priori information about the function approximation problem: architectural constraints and connection weight constraints. On one hand, the activation functions of the hidden neurons are specific polynomial functions. On the other hand, the connection weight constraints are obtained from the fir…
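As a rough illustration of the architectural constraint mentioned in the abstract, the sketch below builds a single-hidden-layer feedforward network whose hidden units use polynomial activations of increasing degree instead of sigmoids, and fits only the output weights by least squares. The choice of monomial activations, the NumPy layout, and the target function are illustrative assumptions, not the authors' exact construction or training algorithm.

```python
import numpy as np

class PolyHiddenNet:
    """Single-hidden-layer feedforward net whose k-th hidden unit applies a
    polynomial activation z -> z**(k+1) (an assumed stand-in for the paper's
    'specific polynomial functions'; the exact polynomials are not given here)."""

    def __init__(self, n_hidden, rng=None):
        rng = rng or np.random.default_rng(0)
        self.n_hidden = n_hidden
        self.w_in = rng.normal(size=n_hidden)    # input-to-hidden weights
        self.w_out = rng.normal(size=n_hidden)   # hidden-to-output weights

    def hidden(self, x):
        # hidden pre-activations, then polynomial activations of rising degree
        z = np.outer(x, self.w_in)                                   # (n_samples, n_hidden)
        return np.column_stack([z[:, k] ** (k + 1) for k in range(self.n_hidden)])

    def forward(self, x):
        return self.hidden(x) @ self.w_out                           # one output per sample

# crude training step: least-squares fit of the output weights
# (the paper's constrained algorithm additionally restricts these weights)
net = PolyHiddenNet(n_hidden=4)
x = np.linspace(-1.0, 1.0, 200)
y = np.sin(np.pi * x)                                                # target function to approximate
net.w_out, *_ = np.linalg.lstsq(net.hidden(x), y, rcond=None)
print("max abs error:", np.max(np.abs(net.forward(x) - y)))
```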

Cited by 54 publications (11 citation statements)
References 15 publications
“…The MLP is used to approximate differential equations [5,6,18,19]. In this paper, weights are denoted by reversed indexes, corresponding respectively to the ordinal numbers of their output and input neurons.…”
Section: Algorithm 1: MLP Neural Network Approximation (mentioning, confidence: 99%)
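The statement above concerns using an MLP to approximate differential equations. A common way to do this is the collocation/trial-solution approach sketched below for the ODE y'(x) = -y(x), y(0) = 1; the trial form ψ(x) = 1 + x·N(x), the collocation points, the central-difference derivative, and the BFGS optimizer are assumptions for illustration, not the specific scheme of references [5,6,18,19].

```python
import numpy as np
from scipy.optimize import minimize

# Collocation points on [0, 2] for the ODE y'(x) = -y(x), y(0) = 1
xs = np.linspace(0.0, 2.0, 40)
n_hidden = 8

def mlp(params, x):
    # small tanh MLP with one hidden layer; parameters packed into one vector
    w1, b1, w2 = params[:n_hidden], params[n_hidden:2*n_hidden], params[2*n_hidden:]
    return np.tanh(np.outer(x, w1) + b1) @ w2

def trial(params, x):
    # trial solution psi(x) = 1 + x * N(x) enforces psi(0) = 1 exactly
    return 1.0 + x * mlp(params, x)

def residual_loss(params, h=1e-4):
    dpsi = (trial(params, xs + h) - trial(params, xs - h)) / (2 * h)  # central difference
    return np.mean((dpsi + trial(params, xs)) ** 2)                   # ODE residual y' + y = 0

p0 = np.random.default_rng(1).normal(scale=0.5, size=3 * n_hidden)
res = minimize(residual_loss, p0, method="BFGS")
print("max error vs exp(-x):", np.max(np.abs(trial(res.x, xs) - np.exp(-xs))))
```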
“…We apply the augmented Lagrange multiplier method to solve (18) and construct the augmented performance index φ(ω, b^(1), v, b^(2), λ, σ)…”
Section: B. Training Algorithm of ALMNN (mentioning, confidence: 99%)
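To show the general shape of such an augmented performance index, the sketch below runs a textbook augmented-Lagrangian loop on a toy equality-constrained problem: the index adds a multiplier term and a quadratic penalty to the objective, and the multiplier and penalty are updated between inner minimizations. The quadratic objective, the constraint, and the update schedule are generic assumptions, not the ALMNN formulation in (18).

```python
import numpy as np
from scipy.optimize import minimize

def f(theta):                      # toy objective: distance to the point (3, 2)
    return (theta[0] - 3.0) ** 2 + (theta[1] - 2.0) ** 2

def g(theta):                      # equality constraint: theta_0 + theta_1 = 4
    return np.array([theta[0] + theta[1] - 4.0])

theta = np.zeros(2)
lam = np.zeros(1)                  # Lagrange multiplier estimate
sigma = 1.0                        # penalty parameter

for _ in range(10):
    def aug(theta_):
        # augmented performance index: f + lambda^T g + (sigma/2) * ||g||^2
        c = g(theta_)
        return f(theta_) + lam @ c + 0.5 * sigma * (c @ c)
    theta = minimize(aug, theta, method="BFGS").x   # inner unconstrained minimization
    lam = lam + sigma * g(theta)                    # first-order multiplier update
    sigma *= 2.0                                    # gradually tighten the penalty

print(theta, lam)                  # theta converges near (2.5, 1.5)
```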
“…In Fig. 8(a) and (b), the ALMNN's training errors decrease more rapidly than those of the DINN, and the ALMNN converges faster because the constraints narrow the solution space of the performance index in (18) and avoid oscillation during training, so the optimal parameters of the NN are obtained quickly.…”
Section: A. Training for the ALMNN-Based NN (mentioning, confidence: 99%)
“…Second, it does not consider the network structure features or the properties of the problem involved, so its generalization capability is limited. Finally, since BP algorithms are gradient-based learning algorithms, they converge very slowly [2][3][4][5][6][7].…”
Section: Introduction (mentioning, confidence: 99%)