2006
DOI: 10.1016/j.neunet.2006.01.017

Invariance priors for Bayesian feed-forward neural networks

Abstract: Neural networks are well known for their flexibility in problems where there is insufficient knowledge to set up a proper model. On the other hand, this flexibility can cause over-fitting and hamper the generalization properties of neural networks. Many approaches to regularize neural networks have been suggested, but most of them are based on ad hoc arguments. Employing the principle of transformation invariance, we derive a general prior in accordance with Bayesian probability theory for feed-forward networks. A…
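For orientation only: in Bayesian neural-network training, a prior over the weights enters the objective as an additive negative log-prior term. The sketch below uses a plain Gaussian (weight-decay) prior as a stand-in, not the transformation-invariance prior derived in this paper; the function name, parameters, and NumPy setting are illustrative assumptions.

```python
import numpy as np

def neg_log_posterior(weights, residuals, sigma_noise=0.1, sigma_prior=1.0):
    """Negative log-posterior (up to a constant): data misfit + negative log-prior.

    NOTE: the Gaussian N(0, sigma_prior^2) weight prior below is a generic
    stand-in (equivalent to weight decay), not the invariance prior derived
    in the paper; all names here are assumptions for illustration.
    """
    data_term = 0.5 * np.sum(residuals ** 2) / sigma_noise ** 2   # Gaussian likelihood term
    prior_term = 0.5 * np.sum(weights ** 2) / sigma_prior ** 2    # generic Gaussian prior term
    return data_term + prior_term
```

Minimizing this objective over the weights gives the MAP estimate produced by such a regularizer; swapping in a different prior changes only the second term.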

Cited by 11 publications (11 citation statements)
References 10 publications
“…Again, errors play a key role in this consideration. BNNs also allow the identification of novelties since confidence measures for the reconstructed quantities are part of the BNN result [17].…”
Section: More Applications of Bayesian Probability Theory in Fusion (mentioning, confidence: 99%)
“…As a result, the NN's reconstruction can fail if the channel data is too different from the training set, although this may be detected [5], while the FB's always has some artifacts which must be filtered by a post-processing algorithm.…”
Section: Discussion (mentioning, confidence: 99%)
“…The training and layer optimization of the network was performed using Bayesian methods [5] and produced a network with two hidden layers of 9 neurons each. The inputs for the network are the data channels from the photodiodes and the (x, y) position of the pixel being reconstructed, and the output is the value of this pixel.…”
Section: Neural Network (mentioning, confidence: 99%)
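For concreteness, the following is a minimal sketch of the kind of network described in the statement above: a feed-forward net with two hidden layers of 9 neurons each, taking photodiode channel values plus the (x, y) pixel position as inputs and returning a single pixel value. The NumPy setting, tanh activations, initialisation, and the number of channels are assumptions for illustration, not the cited work's implementation.

```python
import numpy as np

def tanh_mlp(x, params):
    """Forward pass of a small MLP: inputs -> 9 -> 9 -> 1 output (pixel value).

    `x` stacks the photodiode channel values with the (x, y) pixel
    coordinates; the layer sizes follow the description above, while the
    activation and initialisation are illustrative assumptions.
    """
    W1, b1, W2, b2, W3, b3 = params
    h1 = np.tanh(x @ W1 + b1)   # first hidden layer, 9 neurons
    h2 = np.tanh(h1 @ W2 + b2)  # second hidden layer, 9 neurons
    return h2 @ W3 + b3         # scalar output: reconstructed pixel value

def init_params(n_inputs, rng=np.random.default_rng(0)):
    """Random initialisation for the layer shapes used above."""
    sizes = [(n_inputs, 9), (9, 9), (9, 1)]
    params = []
    for n_in, n_out in sizes:
        params += [rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_in, n_out)),
                   np.zeros(n_out)]
    return params

# Example: 16 hypothetical photodiode channels plus the (x, y) coordinates.
params = init_params(n_inputs=16 + 2)
pixel = tanh_mlp(np.zeros(18), params)
```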
“…This has been demonstrated by a mock bolometry example with realistic uncertainties. The straightforward next step is the addition of a suitable MCMC method (see [13]) to improve on the presently used evidence approximations, and the integration into the framework of Bayesian Neural Networks [14] to achieve real-time deconvolution capabilities with ≈ 10^3 deconvolutions/s.…”
Section: Discussion (mentioning, confidence: 99%)