Proceedings Fourth International Conference/Exhibition on High Performance Computing in the Asia-Pacific Region 2000
DOI: 10.1109/hpc.2000.843515
Two-stage parallel partial retraining scheme for defective multi-layer neural networks

Cited by 2 publications (1 citation statement)
References 4 publications
“…The previous partial retraining was introduced to compensate for stuck defects in conventional CMOS-based hardware systems [20]. In this previous partial retraining, if one of a neuron’s links is defective, such as stuck-at-0 or stuck-at-1, all of the links belonging to the defective neuron must be retrained [8,9,21]. This kind of partial retraining cannot be used in the memristor crossbar, where many defects are randomly distributed over the entire crossbar [9].…”
Section: Methods
confidence: 99%
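The excerpt describes the conventional partial-retraining rule: when any link of a neuron is stuck, every link of that neuron becomes trainable again while the stuck links themselves stay frozen. A minimal numpy sketch of that rule is below; the layer shape, defect positions, and dummy gradient are illustrative assumptions, not taken from the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single layer: weights[i, j] is the link from input j to neuron i.
weights = rng.normal(size=(4, 6))

# Inject two stuck-at defects (assumed positions, for illustration only).
stuck_mask = np.zeros_like(weights, dtype=bool)
stuck_value = np.zeros_like(weights)
stuck_mask[1, 2] = True; stuck_value[1, 2] = 0.0   # stuck-at-0 link
stuck_mask[3, 0] = True; stuck_value[3, 0] = 1.0   # stuck-at-1 link
weights = np.where(stuck_mask, stuck_value, weights)

# Conventional partial retraining: a neuron is defective if any of its
# links is stuck; all links of a defective neuron are retrained,
# except the stuck links themselves, which cannot change.
defective_neurons = stuck_mask.any(axis=1)
retrain_mask = defective_neurons[:, None] & ~stuck_mask

# One gradient step restricted to the retrainable links (dummy gradient).
grad = rng.normal(size=weights.shape)
weights -= 0.01 * np.where(retrain_mask, grad, 0.0)
```

Healthy neurons (rows with no stuck link) are left untouched, which is what makes the scheme "partial"; the crossbar case in the excerpt breaks this because randomly scattered defects would mark nearly every neuron defective.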