1997
DOI: 10.1103/physreve.55.7434
Finite size scaling of the Bayesian perceptron

Abstract: We study numerically the properties of the Bayesian perceptron through a gradient descent on the optimal cost function. The theoretical distribution of stabilities is deduced. It predicts that the optimal generalizer lies close to the boundary of the space of (error-free) solutions. The numerical simulations are in good agreement with the theoretical distribution. The extrapolation of the generalization error to infinite input space size agrees with the theoretical results. Finite size corrections are negative…
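The extrapolation mentioned in the abstract amounts to fitting the measured generalization error against 1/N and reading off the intercept. A minimal sketch of that procedure, using illustrative numbers (not data from the paper) and a correction that is linear in 1/N with a negative sign, as the abstract reports:

```python
import numpy as np

# Hypothetical measured generalization errors eps(N) at several
# input-space sizes N (illustrative values only, not the paper's data).
N = np.array([50, 100, 200, 400, 800], dtype=float)
eps = np.array([0.190, 0.205, 0.2125, 0.21625, 0.218125])

# Assume finite-size corrections linear in 1/N: eps(N) = eps_inf + a/N.
# A least-squares fit in the variable x = 1/N recovers both terms;
# the intercept eps_inf is the extrapolation to infinite input size.
a, eps_inf = np.polyfit(1.0 / N, eps, 1)

print(f"eps_inf = {eps_inf:.4f}, slope a = {a:.4f}")
```

On these synthetic points the fitted slope comes out negative, mirroring the sign of the finite-size corrections reported in the abstract.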

Cited by 9 publications (13 citation statements)
References 13 publications
“…The latter quantity itself is influenced by finite size effects. Extensive numerical simulations show that the corrections are linear in 1/N [24][25][26] and hence they are negligible after clipping and obtaining ρ_W (as in Eq. 12).…”
Section: Finite Systems - Perfect Learning (mentioning)
Confidence: 99%
“…The variational optimization of R with respect to the choice of V can now be performed as in refs. [15,8,16,17] invoking the Schwarz inequality. We only quote the final result for the resulting overlap R opt at the minimum of this optimal potential:…”
(Mentioning)
Confidence: 99%
“…Both these corrections are of order O(1/√n). This behaviour, numerically verified within several learning scenarios (Buhot, Torres Moreno, & Gordon, 1997; Nadler & Fink, 1997; Schröder & Urbanczik, 1998), shows that the predictions of the statistical mechanics approach are better for larger n.…”
Section: Statistical Mechanics (mentioning)
Confidence: 82%