Proceedings of the Fourth International Conference on Microelectronics for Neural Networks and Fuzzy Systems
DOI: 10.1109/icmnn.1994.593243

Precision requirements for single-layer feedforward neural networks

Abstract: This paper presents a mathematical analysis of the effect of limited-precision analog hardware for weight adaptation to be used in on-chip learning feed-forward neural networks. Easy-to-read equations and simple worst-case estimations for the maximum tolerable imprecision are presented. As an application of the analysis, a worst-case estimation of the minimum size of the weight storage capacitors is presented.
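The capacitor-sizing estimate can be illustrated with a back-of-the-envelope calculation. The sketch below is not the paper's derivation; it only applies the standard leakage relation ΔV = I·t/C to bound the capacitor size needed to keep a stored weight voltage within a tolerable drift. All parameter values are hypothetical placeholders.

```python
# Hedged sketch: minimum weight-storage capacitor size from leakage droop.
# Not the paper's analysis; it uses only the standard relation dV = I*t/C.
# All numbers below are hypothetical placeholders.

def min_capacitance(i_leak, hold_time, max_drift):
    """Smallest C such that leakage i_leak over hold_time
    moves the stored voltage by at most max_drift."""
    return i_leak * hold_time / max_drift

# Example: 1 pA junction leakage, weight refreshed every 1 ms,
# tolerable weight-voltage drift of 1 mV.
C = min_capacitance(i_leak=1e-12, hold_time=1e-3, max_drift=1e-3)
print(f"C >= {C*1e12:.3f} pF")   # -> C >= 1.000 pF
```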

Cited by 18 publications (11 citation statements) | References 9 publications

“…It was shown in [5] that, unlike noise during training, offsets on weight adaptations significantly degrade the learning capabilities of feed-forward neural networks; a short discussion of the results in [5] is presented further on in this article. In analog hardware, the offsets on weight adaptations are caused by offsets in any part of the weight adaptation circuitry.…”
Section: Introduction
confidence: 95%
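This offset-versus-noise distinction is easy to reproduce numerically. The sketch below is my illustration, not code from [5]: it trains a single linear neuron with gradient descent and perturbs every weight update either with zero-mean noise or with a constant offset of the same magnitude. The noise largely averages out; the offset biases where training settles.

```python
# Hedged illustration (not from [5]): zero-mean noise on weight updates
# largely averages out, while a constant offset of equal magnitude
# biases the weights that gradient descent converges to.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
w_true = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ w_true

def train(perturb, steps=2000, lr=0.01):
    w = np.zeros(4)
    for t in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad + perturb(t)     # perturbed weight adaptation
    return np.linalg.norm(w - w_true)

eps = 1e-3
noise  = train(lambda t: rng.normal(0.0, eps, size=4))  # zero-mean noise
offset = train(lambda t: np.full(4, eps))               # constant offset
print(f"final error, noise:  {noise:.4f}")
print(f"final error, offset: {offset:.4f}")   # markedly larger
```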
“…A worst-case estimation of the maximum tolerable parasitic constant weight adaptation, i.e. the maximum tolerable offset on the weight adaptation, is given in [5] for single neurons that use the backpropagation rule [1]. This analysis first estimates the steepness of the (local or global) minimum to be reached during training, and then estimates the maximum offset on the weight adaptations that still allows training to arrive close to this (local or global) minimum.…”
Section: Introduction
confidence: 99%
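The two-step recipe quoted above can be mimicked numerically. The sketch below is a hypothetical illustration, not the estimate from [5]: it probes the curvature (steepness) of the error surface at a found minimum, then bounds the constant parasitic update for which the equilibrium point stays within a chosen distance of that minimum (update balance: lr·∇L = δ, so the miss distance is about δ/(lr·h)).

```python
# Hedged sketch (my illustration, not the estimate in [5]):
# 1) estimate the steepness (smallest curvature) of the error surface
#    at the minimum,
# 2) bound the constant parasitic update that still lets training settle
#    within a chosen distance of that minimum.
import numpy as np

def min_curvature(loss_grad, w_min, h=1e-4):
    """Finite-difference estimate of the smallest curvature at w_min."""
    n = len(w_min)
    H = np.zeros((n, n))
    for i in range(n):
        e = np.zeros(n); e[i] = h
        H[:, i] = (loss_grad(w_min + e) - loss_grad(w_min - e)) / (2 * h)
    return np.linalg.eigvalsh((H + H.T) / 2).min()

# Example quadratic bowl with curvature 0.5 in its flattest direction.
grad = lambda w: np.array([0.5 * w[0], 2.0 * w[1]])
w_min = np.zeros(2)

lr, d_max = 0.01, 0.05            # learning rate, tolerable miss distance
h_min = min_curvature(grad, w_min)  # -> 0.5
max_offset = lr * h_min * d_max     # lr*grad balances the offset at rest
print(f"max tolerable constant adaptation offset ~ {max_offset:.2e}")
```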
“…The extreme learning machine method proposed by Huang et al [16,17] uses the single-layer feed-forward neural network (SLFN) architecture [18]. It randomly chooses the input weights and analytically determines the output weights of the SLFN.…”
Section: Extreme Learning Machine Methods
confidence: 99%
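The quoted recipe (random input weights, analytically determined output weights) fits in a few lines. The sketch below is a minimal ELM in the sense described by Huang et al [16,17], with a least-squares solve via pseudoinverse as the analytic step; the dimensions, sigmoid activation, and toy data are illustrative choices of mine.

```python
# Minimal ELM sketch: random, fixed hidden-layer weights; output weights
# solved analytically by least squares (Moore-Penrose pseudoinverse).
import numpy as np

def elm_fit(X, T, L, rng=np.random.default_rng(0)):
    W = rng.normal(size=(X.shape[1], L))    # random input weights (fixed)
    b = rng.normal(size=L)                  # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))  # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T            # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy regression: learn y = sin(x) from 100 samples, 20 hidden nodes.
X = np.linspace(-3, 3, 100).reshape(-1, 1)
T = np.sin(X)
W, b, beta = elm_fit(X, T, L=20)
print("train MSE:", np.mean((elm_predict(X, W, b, beta) - T) ** 2))
```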
“…Theorem 2. In this theorem, Annema et al [28] provide: for any small positive value ε > 0 and any activation function g(x): ℝ → ℝ that is infinitely differentiable in any interval, there exists L ≤ N such that for N arbitrary distinct input vectors {x_i | x_i ∈ ℝⁿ, i = 1, …”
Section: Concept of ELM
confidence: 99%
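The existence claim behind this theorem can be checked numerically: with L = N randomly chosen hidden nodes, the hidden-layer output matrix H is square and, with probability one, full rank, so the training targets can be interpolated exactly and the training error driven below any ε. A hedged sketch under those assumptions:

```python
# Numerical check of the interpolation claim: with L = N random hidden
# nodes, H is (with probability one) full rank, so ||H @ beta - T|| ~ 0.
import numpy as np

rng = np.random.default_rng(1)
N, n = 10, 3
X = rng.normal(size=(N, n))             # N distinct input vectors
T = rng.normal(size=(N, 1))             # arbitrary targets
W, b = rng.normal(size=(n, N)), rng.normal(size=N)
H = np.tanh(X @ W + b)                  # N x N hidden output matrix

print("rank(H) =", np.linalg.matrix_rank(H))             # -> N
beta = np.linalg.solve(H, T)                             # exact interpolation
print("||H beta - T|| =", np.linalg.norm(H @ beta - T))  # ~ 1e-14
```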
“…Huang et al [17] introduced the ELM method as a learning algorithm for Single-Layer Feed-forward Neural Network (SLFN) design [17,27,28]. The ELM algorithm achieves better generalization ability with faster learning speed.…”
Section: Extreme Learning Machine
confidence: 99%