2009
DOI: 10.1007/s10955-009-9822-1

Generalization Learning in a Perceptron with Binary Synapses

Abstract: We consider the generalization problem for a perceptron with binary synapses, implementing the Stochastic Belief-Propagation-Inspired (SBPI) learning algorithm which we proposed earlier, and perform a mean-field calculation to obtain a differential equation which describes the behaviour of the device in the limit of a large number of synapses N. We show that the solving time of SBPI is of order N√(log N), while the similar, well-known clipped perceptron (CP) algorithm does not converge to a solution at all in…
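The abstract contrasts the SBPI rule with the simpler clipped perceptron (CP) rule. As a rough illustration only, and not the authors' reference implementation, the following Python sketch shows a CP-style learner on binary synapses with an optional SBPI-like stochastic secondary update; the function name and the values of theta_m, p_s and h_max are illustrative placeholders.

```python
import numpy as np

def train_binary_perceptron(patterns, labels, n_epochs=100,
                            secondary=False, theta_m=None, p_s=0.3,
                            h_max=15, seed=None):
    """Hedged sketch of a clipped-perceptron (CP) style learner on
    binary synapses, with an optional SBPI-like secondary update.

    Each synapse i keeps an integer hidden state h[i]; the effective
    binary weight is w[i] = sign(h[i]).  On a classification error,
    every hidden state is pushed towards the correct answer (CP rule).
    With secondary=True, an extra update is applied with probability
    p_s when a pattern is classified correctly but with stability
    below theta_m, reinforcing only the synapses that already agree
    with the desired output.  All parameter values are illustrative
    placeholders, not the ones analysed in the paper.
    """
    rng = np.random.default_rng(seed)
    P, N = patterns.shape
    if theta_m is None:
        theta_m = np.sqrt(N)              # placeholder threshold scale
    h = rng.choice([-1, 1], size=N)       # hidden integer states, never zero
    for _ in range(n_epochs):
        errors = 0
        for mu in rng.permutation(P):
            xi, sigma = patterns[mu], labels[mu]
            w = np.sign(h)
            stability = sigma * np.dot(w, xi)
            if stability <= 0:
                # primary (CP) update: shift all hidden states
                h = np.clip(h + 2 * sigma * xi, -h_max, h_max)
                errors += 1
            elif secondary and stability <= theta_m and rng.random() < p_s:
                # SBPI-like secondary update (sketch): consolidate the
                # synapses whose input agrees with the desired output
                agree = (sigma * xi == w)
                h[agree] = np.clip(h[agree] + 2 * sigma * xi[agree],
                                   -h_max, h_max)
        if errors == 0:
            break
    return np.sign(h)
```

For a quick check one might draw patterns = rng.choice([-1, 1], size=(P, N)) and generate labels from a hypothetical random teacher vector, e.g. labels = np.sign(patterns @ teacher).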

Cited by 22 publications (26 citation statements) · References 18 publications

“…Here, we preliminarily describe a simplified version of that algorithm (called CP in [6]), and we present a circuit which implements it. The CP algorithm has reduced performance with respect to the SBPI algorithm, but our scheme can be extended rather straightforwardly to SBPI, or a variant thereof (such as the one proposed in [7]), since it is already able to model the most crucial quantities used in the algorithm; the extension to the complete SBPI algorithm will be the subject of future work, currently in preparation. A set ξ of binary patterns is presented to a network of N .…”
Section: Learning Algorithm
confidence: 99%
“…Theoretical and numerical studies on various algorithms have found that the generalization error ε of the passive learning process decreases with the pattern density α only algebraically, e.g., ε ∝ α⁻¹. This means that in the thermodynamic limit of N → ∞, perfect inference is unlikely to be achieved at any finite value of the pattern density α [13][14][15][16][17][18][19].…”
Section: Introduction
confidence: 99%
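A worked restatement of the scaling quoted above may be useful; the constant c is generic and the algebraic form of ε(α) is only the example mentioned in the statement, not a result taken from any specific reference:

```latex
% Pattern density and algebraic decay of the generalization error.
% c is a generic constant, used only for illustration.
\[
  \alpha = \frac{P}{N}, \qquad \varepsilon(\alpha) \simeq \frac{c}{\alpha}.
\]
% For any fixed \alpha, \varepsilon remains of order c/\alpha as N \to \infty,
% so perfect inference (\varepsilon \to 0) requires \alpha \to \infty,
% i.e. a number of patterns P growing faster than linearly in N.
```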
“…In nature, synaptic weights are known to be plastic, low-precision and unreliable, and it is an interesting question whether this stochasticity helps learning or hinders it. The debate about this issue has a long history and is still unresolved (see [1] and references therein). Here, we provide quantitative evidence that the stochasticity associated with noisy, low-precision synapses can drive elementary supervised learning processes towards a particular class of solutions which, despite being rare, are robust to noise and generalize well: two crucial features for learning processes. In recent years, multi-layer (deep) neural networks have gained prominence as powerful tools for tackling a large number of cognitive tasks [2].…”
mentioning
confidence: 99%
“…Also notice that for any maximizer W* of Problem (1), δ(W − W*) is a maximizer of Problem (3) provided that it belongs to the parametric family, as can be shown using Jensen's inequality. Problem (3) is a "distributional" relaxation of Problem (1).…”
mentioning
confidence: 99%