Proceedings Pacific Rim International Symposium on Fault-Tolerant Systems
DOI: 10.1109/prfts.1997.640150

Fault tolerant constructive algorithm for feedforward neural networks

Cited by 23 publications (29 citation statements)
References 10 publications
“…In (Hammadi, Ohmameuda, Kaneko, & Ito, 1998), a dynamic constructive algorithm is used to construct fault tolerant feedforward networks. This dynamic constructive fault tolerant algorithm (DCFTA) estimates a relevance factor for each weight and uses it to update the weights in a selective manner.…”
Section: Discussion
confidence: 99%
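To make the idea of relevance-guided selective updates concrete, the following is a minimal sketch, not the DCFTA formulation from the paper: it approximates the relevance of a weight by the loss increase when that weight is stuck at zero, and damps the gradient step of the most relevant weights so that no single connection dominates the mapping. The toy data, learning rate, and damping factor are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (single linear layer) of a relevance-guided, selective update.
# Relevance of a weight ~ loss increase when that weight is stuck at zero.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 4))                 # toy inputs
y = X @ np.array([1.0, -2.0, 0.5, 0.0])      # toy linear targets
W = rng.normal(scale=0.1, size=4)            # weights to train

def loss(w):
    return np.mean((X @ w - y) ** 2)

lr, damp = 0.05, 0.1
for epoch in range(500):
    grad = 2 * X.T @ (X @ W - y) / len(X)
    # relevance: loss increase when each weight is individually zeroed
    relevance = np.array([loss(np.where(np.arange(4) == i, 0.0, W)) - loss(W)
                          for i in range(4)])
    # damp the update of the most relevant weights (selective update)
    scale = np.where(relevance > np.median(relevance), damp, 1.0)
    W -= lr * scale * grad

print("trained weights:", np.round(W, 3))
```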
“…A method to improve the fault tolerance of backpropagation networks is presented in [30], which restrained the magnitudes of the connections during the training process. Hammadi and Ito [16] demonstrate a training algorithm that reduces the relevance of the weights. In [16], the relevance of each weight is estimated in every training epoch, and the weight magnitude is then decreased accordingly.…”
Section: Related Work
confidence: 99%
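As an illustration of restraining connection magnitudes during training, here is a minimal sketch under stated assumptions, not the exact procedure of [30]: after every gradient step the weights are clipped to a bound w_max so that no single connection carries too much of the computation. The bound, learning rate, and toy classification task are hypothetical.

```python
import numpy as np

# Minimal sketch: logistic-regression training with weight magnitudes
# restrained (clipped) after every update.
rng = np.random.default_rng(1)
X = rng.normal(size=(128, 3))
y = (X @ np.array([2.0, -3.0, 1.0]) > 0).astype(float)  # toy binary targets

W = rng.normal(scale=0.1, size=3)
lr, w_max = 0.1, 1.5

for epoch in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ W)))   # sigmoid output
    grad = X.T @ (p - y) / len(X)        # logistic-loss gradient
    W -= lr * grad
    W = np.clip(W, -w_max, w_max)        # restrain connection magnitudes

print("bounded weights:", np.round(W, 3))
```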
“…Improving the tolerance of a neural network to random node faults, stuck-at node faults and weight noise has been researched for almost two decades [4,6,5,7,9,12,13,16,17,19]. Many methods, such as injecting random node faults [18,3], injecting weight noise during training (for multilayer perceptrons (MLP) [14,15], recurrent neural networks (RNN) [11], or pulse-coupled neural networks (PCNN) [8]), or injecting node noise (response variability) during training [2] (for PCNN), have been developed and demonstrated with success via intensive computer simulations. Although the idea of injecting weight noise during training is straightforward and its implementation is extremely elegant, theoretical analysis regarding the convergence of these algorithms and the objective functions they minimize is scarce [1,2,14,15].…”
Section: Introduction
confidence: 99%
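The weight-noise-injection idea mentioned above can be sketched as follows, assuming a one-hidden-layer MLP trained with plain gradient descent; the noise level, architecture, and toy regression task are illustrative assumptions, not taken from the cited papers. Each forward/backward pass uses a noisy copy of the weights, so the learned mapping remains accurate when the weights are later perturbed.

```python
import numpy as np

# Minimal sketch: inject multiplicative-free (additive) weight noise on every
# training pass; the clean weights are updated with gradients computed at the
# noisy weights.
rng = np.random.default_rng(2)
X = rng.normal(size=(256, 2))
y = np.sin(X[:, :1]) + 0.5 * X[:, 1:]            # toy regression target

W1 = rng.normal(scale=0.5, size=(2, 16))
W2 = rng.normal(scale=0.5, size=(16, 1))
lr, sigma = 0.05, 0.05                           # step size, weight-noise std

for epoch in range(500):
    # perturb weights for this pass only (noise is not accumulated)
    W1n = W1 + rng.normal(scale=sigma, size=W1.shape)
    W2n = W2 + rng.normal(scale=sigma, size=W2.shape)
    h = np.tanh(X @ W1n)
    out = h @ W2n
    err = out - y
    # gradients w.r.t. the noisy weights, applied to the clean weights
    gW2 = h.T @ err / len(X)
    gW1 = X.T @ ((err @ W2n.T) * (1 - h ** 2)) / len(X)
    W1 -= lr * gW1
    W2 -= lr * gW2

print("final MSE:", round(float(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2)), 4))
```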