2001
DOI: 10.1162/089976601300014556

Minimal Feedforward Parity Networks Using Threshold Gates

Abstract: This article presents preliminary research on the general problem of reducing the number of neurons needed in a neural network so that the network can perform a specific recognition task. We consider a single-hidden-layer feedforward network in which only McCulloch-Pitts units are employed in the hidden layer. We show that if only interconnections between adjacent layers are allowed, the minimum size of the hidden layer required to solve the n-bit parity problem is n when n ≤ 4.
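
As a concrete illustration (a minimal sketch of the standard textbook construction, not code from the paper itself), the following Python snippet builds such a network: hidden McCulloch-Pitts unit i fires when at least i input bits are on, and the output gate combines the hidden units with alternating weights. It also checks the construction exhaustively for small n.

```python
from itertools import product

def parity_network(x):
    """n-bit parity via n McCulloch-Pitts units in one hidden layer,
    with connections only between adjacent layers."""
    n = len(x)
    s = sum(x)
    # Hidden unit i (i = 1..n) is a threshold gate firing iff sum(x) >= i.
    hidden = [1 if s >= i else 0 for i in range(1, n + 1)]
    # Output gate: alternating weights +1, -1, +1, ..., threshold at 1.
    # If k bits are on, units 1..k fire and the weighted sum telescopes
    # to 1 when k is odd and to 0 when k is even.
    weighted = sum((-1) ** i * h for i, h in enumerate(hidden))
    return 1 if weighted >= 1 else 0

# Exhaustive check for n = 1..8.
for n in range(1, 9):
    assert all(parity_network(x) == sum(x) % 2
               for x in product([0, 1], repeat=n))
print("n hidden threshold gates compute n-bit parity for n = 1..8")
```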

Cited by 9 publications (6 citation statements)
References 4 publications
“…Indeed, it was reported in [22] that the most optimistic estimate of the number of hidden neurons needed to implement the n-bit parity function using one hidden layer is √n (which holds for n ≤ 4), while the realistic estimate is O(n). In [14] it was shown, for n up to 4, that the minimum size of the hidden layer required to solve the n-bit parity problem is n. It was theoretically estimated in [31] that the parity-n function could be implemented using an MLF with only (n + 1)/2 hidden neurons for n odd. However, this estimate was confirmed experimentally in [32] and [31] only up to n = 7.…”
Section: Parity N Function
Mentioning confidence: 99%
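
For intuition only (my own tabulation, assuming the estimates quoted above), this snippet compares the three hidden-layer sizes mentioned, √n, (n + 1)/2 for odd n, and n, for small n:

```python
import math

# Tabulate the quoted hidden-layer size estimates for small n.
print(f"{'n':>3} {'ceil(sqrt(n))':>14} {'(n+1)/2 (n odd)':>16} {'n':>4}")
for n in range(1, 10):
    half = (n + 1) // 2 if n % 2 == 1 else "-"
    print(f"{n:>3} {math.ceil(math.sqrt(n)):>14} {str(half):>16} {n:>4}")
```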
“…This result shows that restraining neurons can sometimes play a special role in the construction and analysis of BNNs. To some extent, the research in this paper also extends the scope of the conclusion in [23]. The conclusion reached is that n is an upper bound on the minimum number of hidden neurons for the n-bit parity problem in BNNs.…”
Section: Discussion
Mentioning confidence: 57%
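
Consistent with the claim that the minimum hidden-layer size is n for n ≤ 4, the sketch below (my own brute-force check, not the cited proof) verifies the n = 2 case: with a single hidden threshold gate and connections only between adjacent layers, no weight assignment computes 2-bit parity (XOR). For threshold gates over {0, 1}^2, small integer weights suffice to realize every linearly separable function, so a bounded integer search is enough to witness the failure.

```python
from itertools import product

def step(z):
    return 1 if z >= 0 else 0

inputs = list(product([0, 1], repeat=2))
xor = {x: x[0] ^ x[1] for x in inputs}

R = range(-3, 4)  # bounded integer weights and thresholds
found = False
for w1, w2, bh, v, bo in product(R, repeat=5):
    def net(x):
        h = step(w1 * x[0] + w2 * x[1] + bh)  # one hidden threshold gate
        return step(v * h + bo)               # output threshold gate
    if all(net(x) == xor[x] for x in inputs):
        found = True
        break
print("single-hidden-unit net computing XOR found:", found)  # prints False
```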
“…BNNs that have a single hidden layer and adopt linearly separable structures are easy to implement in hardware. But the problem of determining the minimum number of hidden neurons for the n-bit parity problem has remained unsolved [16][17][18][19][20][21][22][23][24]. Ref.…”
Section: Introduction
Mentioning confidence: 99%
“…These two examples can be found in any modern book on neural networks, and many specific ideas about the design of the most efficient network for solving the parity problem can also be found (see, for example, Fung and Li 2001; Mizutani et al. 2000). The number of linearly separable Boolean functions of n variables is very small in comparison with the number of all Boolean functions of n variables for n > 3.…”
Section: Introduction
Mentioning confidence: 99%