2002
DOI: 10.1016/s0925-2312(01)00695-6

A deterministic algorithm that emulates learning with random weights

Abstract: The expectation of a function of random variables can be modeled as the value of the function at the mean of the variables plus a penalty term. Here, this penalty term is calculated exactly, and the properties of different approximations are analyzed. Then, a deterministic algorithm for minimizing the expected error of a feedforward network of random weights is presented. Given a particular feedforward network architecture and a training set, this algorithm accurately finds the weight configuration that …
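As a reading aid for the decomposition the abstract describes (expectation = value at the mean + penalty term), a minimal sketch in standard notation follows. It assumes independent weights w_i with means mu_i and variances sigma_i^2 and uses the familiar second-order Taylor truncation; the paper itself computes the penalty term exactly rather than through this approximation.

% Second-order sketch of "expectation = value at the mean + penalty term".
% The truncation is an assumption for illustration; the paper's exact form differs.
\mathbb{E}_{\mathbf{w}}\!\left[E(\mathbf{w})\right]
  = E(\boldsymbol{\mu})
  + \underbrace{\tfrac{1}{2}\sum_i \sigma_i^{2}\,
      \left.\frac{\partial^{2}E}{\partial w_i^{2}}\right|_{\mathbf{w}=\boldsymbol{\mu}}
      + O(\sigma^{4})}_{\text{penalty term}},
  \qquad w_i \sim \mathcal{N}(\mu_i,\sigma_i^{2})\ \text{independent}.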

Cited by 7 publications (13 citation statements). References 17 publications.
“…Figure 1 shows two networks trained with the same 20 training points, randomly drawn from the function 0.3x^3 + 0.3x^2 + 10/(3(x+3)^2) in the interval [-1.5, 1.5], using a deterministic algorithm to emulate learning with random weights [5,7]: the mean of the weight distribution is adapted to minimize the average error over the distribution. The complexity of the function implemented by that mean is regulated via the variance of the distribution of the weights.…”
Section: The Phenomenon: Learned-Function Invariance
confidence: 99%
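The training setup this quotation describes can be approximated with a short Monte Carlo sketch: sample weights around an adaptable mean, average the error gradient over the samples, and update the mean. This is an emulation by sampling, not the deterministic algorithm of the cited paper, and every name and hyperparameter below (network size, sigma, learning rate) is an illustrative assumption. Raising sigma smooths the function implemented by the mean weights, mirroring the quoted claim that the variance regulates complexity.

import numpy as np

rng = np.random.default_rng(0)

# Target function and sampling interval taken from the quotation above.
def target(x):
    return 0.3 * x**3 + 0.3 * x**2 + 10.0 / (3.0 * (x + 3.0)**2)

x = rng.uniform(-1.5, 1.5, size=(20, 1))  # 20 training points, as quoted
y = target(x)

def init(n_hidden):
    # One-hidden-layer tanh network; sizes and scales are illustrative.
    return {"W1": rng.normal(0, 0.5, (1, n_hidden)),
            "b1": np.zeros(n_hidden),
            "W2": rng.normal(0, 0.5, (n_hidden, 1)),
            "b2": np.zeros(1)}

def forward(p, x):
    h = np.tanh(x @ p["W1"] + p["b1"])
    return h @ p["W2"] + p["b2"], h

def train(n_hidden=20, sigma=0.1, lr=0.05, steps=2000, samples=10):
    """Adapt the MEAN of a Gaussian weight distribution by averaging the
    error gradient over weight samples w = mean + sigma * noise (a Monte
    Carlo stand-in for the paper's deterministic algorithm)."""
    p = init(n_hidden)
    for _ in range(steps):
        grads = {k: np.zeros_like(v) for k, v in p.items()}
        for _ in range(samples):
            # Sample weights around the current mean.
            q = {k: v + sigma * rng.normal(size=v.shape) for k, v in p.items()}
            out, h = forward(q, x)
            err = out - y  # gradient of the (halved) mean squared error
            # Backprop through the sampled network; by reparameterization,
            # the gradient w.r.t. the mean equals the gradient at the sample.
            grads["W2"] += h.T @ err / len(x)
            grads["b2"] += err.mean(0)
            dh = (err @ q["W2"].T) * (1 - h**2)
            grads["W1"] += x.T @ dh / len(x)
            grads["b1"] += dh.mean(0)
        for k in p:
            p[k] -= lr * grads[k] / samples
    return p

# Larger sigma -> smoother mean function (variance regulates complexity).
for sigma in (0.0, 0.3):
    p = train(sigma=sigma)
    out, _ = forward(p, x)
    print(f"sigma={sigma}: train MSE={np.mean((out - y)**2):.4f}")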
“…This work stems from an observation we made in analyzing the behaviour of a deterministic algorithm to emulate neural learning with random weights [5,7]. We found that, for a fixed variance greater than zero, there is a number of hidden units above which the learned function does not change, or the change is slight and tends to zero as the size of the network grows [6].…”
Section: Introduction
confidence: 99%
“…The last remark is that the definitions are independent from the training set, the only implicit requirement being that any non-empty training set would bring the functional distance limit to zero. […] using a deterministic algorithm to emulate learning with random weights [5,6]: the mean of the weight distribution is adapted to minimize the average error over the distribution. The complexity of the function implemented by that mean is regulated via the variance of the distribution of the weights.…”
Section: The Phenomenon: Learned-Function Invariance
confidence: 99%
“…Even with finite training sets, it is equivalent to the addition of a penalization term [5,6], although this is not a classical regularizer because it involves the output training patterns and depends on E_W(·). However, the function invariance property of noise addition is better understood by taking a wider perspective.…”
Section: Regularizers Implying a Target Mean Function
confidence: 99%
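To make the quoted point concrete: for squared error at a single pattern (x, y), the well-known second-order expansion of weight-noise injection yields the penalization term sketched below. The residual f(x; mu) − y appears inside the penalty, which is why it involves the output training patterns and is not a classical regularizer. The notation (f for the network output, independent noise variances sigma_i^2) is an assumption for illustration, not taken verbatim from [5,6].

% Second-order expansion of the expected squared error under weight noise:
\mathbb{E}_{\boldsymbol{\varepsilon}}\!\left[
    \bigl(f(x;\boldsymbol{\mu}+\boldsymbol{\varepsilon})-y\bigr)^{2}\right]
  \approx \bigl(f(x;\boldsymbol{\mu})-y\bigr)^{2}
  + \sum_i \sigma_i^{2}\left[
      \Bigl(\frac{\partial f}{\partial w_i}\Bigr)^{\!2}
      + \bigl(f(x;\boldsymbol{\mu})-y\bigr)\,
        \frac{\partial^{2}f}{\partial w_i^{2}}
    \right]_{\mathbf{w}=\boldsymbol{\mu}}.
% The second bracketed term carries the residual f - y: the penalty
% depends on the output targets, not only on the function being learned.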