2011
DOI: 10.1007/s11432-011-4405-6

The upper bound of the minimal number of hidden neurons for the parity problem in binary neural networks

Abstract: Binary neural networks (BNNs) have important value in many application areas. They adopt linearly separable structures, which are simple and easy to implement in hardware. For a BNN with a single hidden layer, the problem of how to determine the upper bound of the number of hidden neurons has not been satisfactorily solved. This paper defines a special structure in Boolean space called most isolated samples (MIS). We prove that at least 2^(n−1) hidden neurons are needed to express the MIS logical relationship …
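The counting behind this bound can be checked directly. The Python sketch below is my own illustration, not code from the paper (the helpers parity and hamming are hypothetical names): it verifies that the 2^(n−1) true samples of the n-bit parity function are pairwise at Hamming distance at least 2, so each true sample is maximally isolated, with every Hamming-1 neighbour carrying the opposite label. Assuming, as in typical linearly separable BNN constructions, that the output ORs the hidden neurons' regions and each region may contain only true samples, each hidden neuron can then cover at most one isolated sample, which forces at least 2^(n−1) hidden neurons.

```python
from itertools import product

def parity(x):
    """Odd parity of a binary tuple: 1 iff an odd number of bits are set."""
    return sum(x) % 2

def hamming(a, b):
    """Number of coordinates in which two binary tuples differ."""
    return sum(ai != bi for ai, bi in zip(a, b))

n = 4
cube = list(product([0, 1], repeat=n))
true_samples = [x for x in cube if parity(x) == 1]

# There are exactly 2^(n-1) odd-parity vertices ...
assert len(true_samples) == 2 ** (n - 1)

# ... and any two of them differ in an even number of bits, hence in at
# least 2 bits: every true vertex is isolated, i.e. all of its
# Hamming-1 neighbours carry the opposite label.
assert all(hamming(a, b) >= 2
           for i, a in enumerate(true_samples)
           for b in true_samples[i + 1:])
print(f"n={n}: {len(true_samples)} mutually isolated true samples")
```

For n = 4 this reports 8 mutually isolated true samples, matching 2^(n−1).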

Cited by 6 publications (3 citation statements)
References 20 publications
“…Then, the number of nodes was gradually increased until the learning error was no longer considerably reduced. The optimal number of nodes in a hidden layer is commonly determined using Equation 1 [38,41,42].…”
Section: Topology of the BP Neural Network with Double Hidden Layers
confidence: 99%
“…Various efforts have been made to explore the relations between the approximation ability and the number of nodes of some specific neural networks, such as single-hidden-layer feedforward neural networks (SLFNs) and two-hidden-layer feedforward neural networks with specific or conditional activation functions [11][12][13][14][15][16][17][18][19][20][21][22][23][24][25][26][27][28]. For example, it was proved in [12] that N arbitrary distinct samples can be learned precisely by standard SLFNs with N hidden neurons (including biases) and the signum activation function.…”
Section: Introduction
confidence: 99%
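The memorization result quoted above is easy to reproduce numerically. The sketch below is one constructive illustration under my own assumptions (a random projection direction, which separates distinct samples with probability one), not the proof from [12]: with N signum hidden neurons whose biases place thresholds between consecutively sorted samples, the N×N hidden-output matrix is triangular with nonzero diagonal, hence invertible, so exact output weights always exist.

```python
import numpy as np

rng = np.random.default_rng(0)

# N arbitrary distinct samples with arbitrary real targets.
N, d = 8, 3
X = rng.normal(size=(N, d))
y = rng.normal(size=N)

# Project onto a random direction; with probability one the projections
# of distinct samples are distinct, so the samples sort along a line.
w = rng.normal(size=d)
order = np.argsort(X @ w)
X, y = X[order], y[order]
p = X @ w

# Hidden neuron j fires (signum = +1) once the projection exceeds a
# threshold (its bias) placed between samples j-1 and j. The resulting
# hidden-output matrix H has +1 on and below the diagonal and -1 above
# it, so it is invertible and exact output weights always exist.
b = np.empty(N)
b[0] = p[0] - 1.0                      # below every sample
b[1:] = (p[:-1] + p[1:]) / 2.0         # between consecutive samples
H = np.sign(p[:, None] - b[None, :])   # signum activations, entries +/-1
beta = np.linalg.solve(H, y)           # exact output weights

assert np.allclose(H @ beta, y)        # all N samples reproduced exactly
print("max error:", np.abs(H @ beta - y).max())
```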
“…However, from a theoretical view, the problems of the lower and upper bounds on the neurons in hidden layers, and of the bounds on the number of hidden layers, for classification and pattern recognition architectures have not been completely studied yet. For instance, one of the currently existing findings is related to binary neural networks [18]. They defined a structure named most isolated samples in the Boolean field and proved that at least (1/2)·2^n hidden …”
Section: Introduction
confidence: 99%