2011
DOI: 10.1109/tnn.2010.2098481
Uniformly Stable Backpropagation Algorithm to Train a Feedforward Neural Network

Abstract: Neural networks (NNs) have numerous applications to online processes, but the problem of stability is rarely discussed. This is an extremely important issue because, if the stability of a solution is not guaranteed, the equipment that is being used can be damaged, which can also cause serious accidents. It is true that in some research papers this problem has been considered, but this concerns continuous-time NN only. At the same time, there are many systems that are better described in the discrete time domai…

Cited by 79 publications (9 citation statements)
References 35 publications
“…These data will be input as training data, and we plan to perform the AI learning cycle again. The trial and improvement method is the best way to construct a high-accuracy neural network [ 13 , 14 ].…”
Section: Discussion
confidence: 99%
“…To optimize its ability to correctly distinguish between experienced and novice surgeons, the neural network learned via machine learning from datasets consisting of input factors from 38 participants (expert group: 11; novice group: 27) ( Figure 1 ). The backpropagation algorithm was employed as a learning strategy [ 13 ]. Learning by the system was repeated until the two groups of surgeons were correctly distinguished.…”
Section: Methods
confidence: 99%
“…Then, we put the samples into the three-layer BP neural network with only one hidden layer to train a feed-forward neural network [15] and get the classification result. The logarithm was selected as the activation function and the other parameters of the BP neural network were set as follows: initial weight σ = 0.9, weight adjustment speed η = 0.2, momentum factor α = 0.9, and global network error E = 0.1.…”
Section: Image Preprocessing and Classification
confidence: 99%
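The configuration quoted above (one hidden layer, learning rate η = 0.2, momentum α = 0.9, a logarithmic-sigmoid activation) can be sketched as a single backpropagation-with-momentum update. The layer sizes, squared-error loss, and the reading of "logarithm" as the log-sigmoid function are illustrative assumptions here, not details taken from the cited paper:

```python
import numpy as np

# Hyperparameters as quoted: weight adjustment speed (learning rate) and momentum factor.
ETA, ALPHA = 0.2, 0.9

def sigmoid(z):
    # Assumed interpretation of the quoted "logarithm" activation (log-sigmoid).
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, y, W1, W2, v1, v2):
    """One backpropagation step with momentum for a 1-hidden-layer network."""
    # Forward pass
    h = sigmoid(W1 @ x)      # hidden-layer activations
    out = sigmoid(W2 @ h)    # network output
    # Backward pass (squared-error loss, sigmoid derivative s*(1-s))
    d_out = (out - y) * out * (1 - out)
    d_hid = (W2.T @ d_out) * h * (1 - h)
    # Momentum update: v <- alpha*v - eta*grad, then W <- W + v
    v2 = ALPHA * v2 - ETA * np.outer(d_out, h)
    v1 = ALPHA * v1 - ETA * np.outer(d_hid, x)
    return W1 + v1, W2 + v2, v1, v2
```

Training would repeat this step, as the quote describes, until the global network error falls below the chosen threshold (E = 0.1 in the quoted setup).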
“…Eq. 5 is solved by backpropagation with stochastic gradient descent [33]. In every iteration, a set of parameters (i.e., θ…”
Section: Reconstruction Based On the Autoencoder
confidence: 99%
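The stochastic-gradient-descent procedure this last quote refers to can be sketched in a few lines: each iteration draws one random sample and takes a gradient step on the parameters θ. The quadratic objective used below is a hypothetical stand-in, since the cited paper's Eq. 5 (an autoencoder reconstruction loss) is not reproduced on this page:

```python
import numpy as np

def grad(theta, x):
    # Gradient of the stand-in objective (theta - x)^2 with respect to theta.
    return 2.0 * (theta - x)

def sgd(theta, samples, lr=0.1, steps=50, seed=0):
    """Stochastic gradient descent: one randomly drawn sample per iteration."""
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        x = samples[rng.integers(len(samples))]  # the "stochastic" part
        theta = theta - lr * grad(theta, x)      # descent step on theta
    return theta
```

With enough iterations, θ settles near the minimizer of the expected objective (here, the mean of the samples), which is the behavior the quoted passage relies on.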