Proceedings. 16th IEEE Symposium on Computer Arithmetic
DOI: 10.1109/dftvs.2003.1250128
Fault tolerant multi-layer neural networks with GA training

Cited by 9 publications (4 citation statements). References 8 publications.
“…Finally, with Equation (10) defined, the error partial derivatives are transformed and, by employing the MIT rule, the update rules for the adaptive feedforward and adaptive feedback gain are written. This can be observed in Equations (11) and (12…”
Section: Model Reference Adaptive Control
confidence: 75%
“…Recently, genetic algorithms have been applied in fault-tolerant control as a strategy to optimize the controlled system in order to accommodate system failures. For instance, in [11] an FTC approach using multi-layer ANNs with a GA was presented. The proposed FTC scheme uses hardware redundancy and weight retraining based on a GA in order to reconfigure the ANN to accommodate the fault.…”
Section: Introduction
confidence: 99%
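The GA-based retraining described above can be illustrated with a minimal sketch. Everything here is a toy assumption, not the paper's actual method: a tiny 2-2-1 network, a stuck-at-zero fault model for one hidden unit, and a simple elitist GA with one-point crossover and Gaussian point mutation that retrains the weights so the surviving unit compensates for the faulty one.

```python
import random

def forward(weights, x, faulty=None):
    """Toy 2-2-1 ReLU network; 'faulty' marks a hidden unit stuck at zero
    (a hypothetical hardware-fault model used only for this sketch)."""
    w1, w2 = weights[:4], weights[4:6]
    h = []
    for j in range(2):
        s = w1[2 * j] * x[0] + w1[2 * j + 1] * x[1]
        h.append(0.0 if j == faulty else max(0.0, s))  # faulty unit outputs 0
    return h[0] * w2[0] + h[1] * w2[1]

def fitness(weights, data, faulty):
    # Negative sum of squared errors: higher is better.
    return -sum((forward(weights, x, faulty) - y) ** 2 for x, y in data)

def ga_retrain(data, faulty, pop=30, gens=200, seed=0):
    """Elitist GA: keep the top half, refill with crossover + mutation."""
    rng = random.Random(seed)
    population = [[rng.uniform(-1, 1) for _ in range(6)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda w: fitness(w, data, faulty), reverse=True)
        survivors = population[: pop // 2]
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(6)            # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(6)              # Gaussian point mutation
            child[i] += rng.gauss(0, 0.3)
            children.append(child)
        population = survivors + children
    return max(population, key=lambda w: fitness(w, data, faulty))
```

With hidden unit 1 stuck at zero, retraining on a small dataset for y = x0 + x1 lets the GA route the function through the remaining healthy unit.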
“…Approaches in this category include constraining the weights to lie within a limited range toward an even weight distribution [204]- [206]. A fourth method is to combine the training process and the fault tolerance objective into an optimization problem solved by nonlinear optimization algorithms, with the aim of learning a network model that performs the desired task while fulfilling fault tolerance constraints [207]- [210]. A fifth method, proposed in [211], considers constructive training in the presence of faults, where neurons are incrementally added whenever the network fails to learn, until satisfactory learning or a user-defined maximum network size is reached.…”
Section: Model Training Modification
confidence: 99%
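The first approach quoted above, constraining weights to a limited range so no single weight dominates, can be sketched in a few lines. This is a toy stand-in for the cited schemes [204]-[206], not their actual algorithms: a linear model trained by SGD with each weight clipped to a bound after every update, so a faulty weight's maximum influence on the output is limited.

```python
import random

def train_clipped(data, w_max=0.5, lr=0.05, epochs=500, seed=0):
    """Toy linear model y = w0*x0 + w1*x1 trained by SGD, with weights
    clipped to [-w_max, w_max] after every update (the 'limited range'
    constraint, simplified for illustration)."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.1, 0.1), rng.uniform(-0.1, 0.1)]
    for _ in range(epochs):
        for x, y in data:
            err = w[0] * x[0] + w[1] * x[1] - y
            for i in range(2):
                w[i] -= lr * err * x[i]
                # Clipping bounds each weight's worst-case contribution,
                # which limits the damage a single stuck weight can do.
                w[i] = max(-w_max, min(w_max, w[i]))
    return w
```

On a consistent target inside the bound (e.g. y = 0.3*x0 + 0.4*x1) the constrained training still converges, while guaranteeing every weight stays within the range.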
“…Inconsistent mutation is adopted to maintain diversity in the population by randomly generating new solutions, which helps engineers obtain a globally optimal solution. Based on the analysis of the weights associated with the wafer fabrication process suggested by Sugawara (2003), the corresponding weight can be obtained from the weight table.…”
Section: <Insert Figure 6>
confidence: 99%
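The "inconsistent mutation" in the quote plausibly refers to non-uniform mutation, a standard diversity-preserving GA operator; the sketch below shows that operator under that assumption, and is not taken from the cited work. Early generations make large random moves (exploration), while later generations shrink the step toward zero (fine-tuning), with every mutant kept inside the variable bounds.

```python
import random

def nonuniform_mutate(x, gen, max_gen, lo, hi, b=2.0, rng=random):
    """Non-uniform mutation (a hypothetical reading of 'inconsistent
    mutation'): perturb one gene by a step whose expected size decays
    as gen approaches max_gen, never leaving [lo, hi]."""
    y = list(x)
    i = rng.randrange(len(y))
    frac = (1.0 - gen / max_gen) ** b      # decays from 1 toward 0
    step = 1.0 - rng.random() ** frac      # step fraction in [0, 1]
    if rng.random() < 0.5:
        y[i] += (hi - y[i]) * step         # move toward the upper bound
    else:
        y[i] -= (y[i] - lo) * step         # move toward the lower bound
    return y
```

At gen = max_gen the step fraction is exactly zero, so mutation becomes the identity; the annealed step size is what keeps early populations diverse without destroying late-stage convergence.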