2018
DOI: 10.1007/978-3-319-74690-6_24
Trained Neural Networks Ensembles Weight Connections Analysis

Cited by 10 publications (5 citation statements) · References 17 publications
“…This is interesting in itself, but will hopefully also open up novel pathways for the mathematical analysis of the behaviour of spiking neural networks trained using Hebbian learning. In particular, it may lead to novel insights about the statistical distribution of weights, which has been of substantial research interest [28,29,30,31]. Secondly and more practically relevant, as we discuss below, our results can be used to adapt the learning rate of the neuron.…”
Section: Introduction
confidence: 81%
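For the weight distributions this excerpt refers to, a minimal illustrative sketch (not from the cited paper or the indexed one): a single linear neuron trained with Oja's rule, a normalised Hebbian update, after which the empirical weight distribution can be inspected. The network size, learning rate, and input statistics below are all hypothetical.

```python
# Sketch (hypothetical setup): single linear neuron trained with Oja's
# rule, a normalised Hebbian update, so the trained weight distribution
# can be examined empirically.
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_steps, eta = 100, 10_000, 1e-3   # hypothetical sizes / rate

w = rng.normal(0.0, 0.1, n_inputs)           # initial weights
for _ in range(n_steps):
    x = rng.normal(0.0, 1.0, n_inputs)       # random input pattern
    y = w @ x                                 # neuron activation
    w += eta * y * (x - y * w)                # Oja's rule: Hebbian term
                                              # plus a decay that keeps
                                              # ||w|| bounded

# Summary statistics of the trained weight distribution
print(f"mean={w.mean():.4f}  std={w.std():.4f}  ||w||={np.linalg.norm(w):.4f}")
```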
“…Suppose the two output neurons are $y^{(0)}$ and $y^{(1)}$, and assume $y^{(0)} > y^{(1)}$ without loss of generality. When random errors are injected into the neural network, the two output neurons accordingly follow normal distributions and can be formulated as $Y^{(0)} \sim \mathcal{N}(y^{(0)}, \mathrm{var}(\Delta_y))$ and $Y^{(1)} \sim \mathcal{N}(y^{(1)}, \mathrm{var}(\Delta_y))$ according to Section III-B. In this case, the probability of wrong classification is essentially that of $Y^{(0)} < Y^{(1)}$. The distribution of $Y^{(0)} < Y^{(1)}$ can be formulated with Equation 13, which is a shifted and scaled normal distribution model.…”
Section: Relation Between RRMSE and Classification Accuracy
confidence: 99%
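If the two outputs are additionally assumed independent, the misclassification probability in this excerpt follows from the difference of the two Gaussians: $Y^{(0)} - Y^{(1)} \sim \mathcal{N}\bigl(y^{(0)} - y^{(1)},\, 2\,\mathrm{var}(\Delta_y)\bigr)$, hence $P\bigl(Y^{(0)} < Y^{(1)}\bigr) = \Phi\bigl(-(y^{(0)} - y^{(1)})/\sqrt{2\,\mathrm{var}(\Delta_y)}\bigr)$. Below is a minimal sketch of that computation with a Monte Carlo cross-check; the concrete values are hypothetical, and the citing paper's Equation 13 is not reproduced here, only the standard derivation it presumably corresponds to.

```python
# Sketch (values hypothetical): probability that injected noise flips a
# binary decision, assuming Y0 and Y1 are independent Gaussians with
# equal variance var_dy around the noise-free outputs y0 > y1.
import numpy as np
from scipy.stats import norm

y0, y1, var_dy = 1.0, 0.4, 0.09   # hypothetical outputs and error variance

# Closed form: Y0 - Y1 ~ N(y0 - y1, 2*var_dy), so
# P(wrong) = P(Y0 < Y1) = Phi(-(y0 - y1) / sqrt(2*var_dy))
p_wrong = norm.cdf(-(y0 - y1) / np.sqrt(2.0 * var_dy))

# Monte Carlo cross-check of the same probability
rng = np.random.default_rng(0)
s0 = rng.normal(y0, np.sqrt(var_dy), 1_000_000)
s1 = rng.normal(y1, np.sqrt(var_dy), 1_000_000)
print(f"closed form: {p_wrong:.4f}   monte carlo: {(s0 < s1).mean():.4f}")
```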
“…But normal and uniform distributions are commonly used weight initialization distributions [16,19,20]. Meanwhile, [1] shows that many weights may fit the t location-scale distribution. When $w$ follows the above distribution, $h$ and $g$ bring strong correlation with the constraint $\mu_{|x|} = \mu_{|w|}$.…”
Section: Similar Operations
confidence: 99%
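Since this excerpt contrasts normal/uniform initializations with a t location-scale fit to trained weights, a hedged sketch of such a fit follows, using scipy's t distribution with loc/scale parameters (exactly the t location-scale family). The weights below are synthetic heavy-tailed stand-ins, not data from any of the cited works.

```python
# Sketch: compare how well a normal vs. a t location-scale distribution
# fits a set of trained weights (synthetic stand-ins here).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
weights = stats.t.rvs(df=3, loc=0.0, scale=0.05, size=10_000,
                      random_state=rng)

# Fit both families by maximum likelihood; scipy's t with loc/scale
# parameters is the t location-scale family.
mu, sigma = stats.norm.fit(weights)
df, loc, scale = stats.t.fit(weights)

# Compare via total log-likelihood (higher means a better fit).
ll_norm = stats.norm.logpdf(weights, mu, sigma).sum()
ll_t = stats.t.logpdf(weights, df, loc, scale).sum()
print(f"normal       ll={ll_norm:.1f}")
print(f"t loc-scale  ll={ll_t:.1f}  (df={df:.2f}, loc={loc:.4f}, scale={scale:.4f})")
```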