2018
DOI: 10.1162/neco_a_01080
A Learning Framework for Winner-Take-All Networks with Stochastic Synapses

Abstract: Many recent generative models make use of neural networks to transform the probability distribution of a simple low-dimensional noise process into the complex distribution of the data. This raises the question of whether biological networks operate along similar principles to implement a probabilistic model of the environment through transformations of intrinsic noise processes. The intrinsic neural and synaptic noise processes in biological networks, however, are quite different from the noise processes used …

Cited by 14 publications (11 citation statements)
References 56 publications
“…A useful precedent is given by Mostafa and Cauwenberghs (2018), in which they approximate the SoftMax function with the probability that each activation in the receiving layer, Gaussian distributed at the limit of input layer size, is higher than the activation with the largest mean value. We would therefore need to solve for dropout probabilities such that the softmax approximation for each receiving neuron i is as close as possible to normalized…”
Section: Appendix A: Induction Proof for the Primary Results
Mentioning confidence: 99%
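The approximation described above can be sketched numerically. Under the assumption that each receiving-layer activation is Gaussian, a_i ~ N(mu_i, sigma_i^2), the softmax output for unit i is approximated by the probability that a_i exceeds the activation with the largest mean (the function name and the pairwise-comparison details below are illustrative, not the authors' exact formulation):

```python
import math
import numpy as np

def gaussian_softmax_approx(mu, sigma):
    """Approximate each unit's softmax output as P(a_i > a_k), where
    a_i ~ N(mu_i, sigma_i^2) and k is the unit with the largest mean,
    then renormalize. Illustrative sketch; assumes independent
    activations and at least two units."""
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    k = int(np.argmax(mu))
    p = np.empty_like(mu)
    for i in range(len(mu)):
        if i == k:
            # compare the max-mean unit against the runner-up
            others = [j for j in range(len(mu)) if j != k]
            j = others[int(np.argmax(mu[others]))]
            z = (mu[k] - mu[j]) / math.sqrt(sigma[k] ** 2 + sigma[j] ** 2)
        else:
            z = (mu[i] - mu[k]) / math.sqrt(sigma[i] ** 2 + sigma[k] ** 2)
        # Gaussian CDF via the error function
        p[i] = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return p / p.sum()
```

The difference of two independent Gaussians is itself Gaussian, which is why each pairwise win probability reduces to a single CDF evaluation.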
“…Variational inference refers to the use of greatly simplified, approximate distributions to draw inferences from complex models. Variational neural networks were pioneered with the Boltzmann machine (Hinton, 2007), Helmholtz machine (Dayan et al, 1995), and their later generalization, the variational autoencoder (VAE, Kingma and Welling, 2014) and have since been further generalized to networks of many kinds (Zhang et al, 2019), including networks that imitate neurobiology (Mostafa and Cauwenberghs, 2018;Neftci et al, 2016). In these models, sampling from the variational distributions is used to approximate an intractable integral in the objective function, to learn regularized solutions, and to generate plausible out-of-sample realizations from the learned, latent distributions.…”
Section: Probabilistic Reasoning
Mentioning confidence: 99%
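The sampling step mentioned above (approximating an intractable expectation over the variational posterior) is commonly implemented with the reparameterization trick, as in the VAE. A minimal sketch, assuming a diagonal-Gaussian posterior (the function name and parameterization are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_latent(mu, log_var, n_samples=1000):
    """Reparameterization trick: draw z = mu + sigma * eps with
    eps ~ N(0, I), so an expectation over the variational posterior
    q(z) can be approximated by an average over these samples.
    Illustrative sketch for a diagonal-Gaussian posterior."""
    mu = np.asarray(mu, dtype=float)
    sigma = np.exp(0.5 * np.asarray(log_var, dtype=float))
    eps = rng.standard_normal((n_samples, len(mu)))
    return mu + sigma * eps
```

Because the noise source eps is independent of the parameters, gradients of a Monte Carlo objective can flow through mu and log_var, which is what makes such models trainable by backpropagation.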
“…Some examples can be found in [9]. We can find models using soft non-linearities [16], probabilistic models [17] or latency-based networks [18].…”
Section: Direct Training
Mentioning confidence: 99%
“…In addition, each neuron is allowed to fire only once. Neurons belonging to the same feature map share the same input synaptic weights and compete with one another to update those shared weights according to a WTA mechanism [72]. The neuron that fires earliest is the winner and is the only one qualified to modify the shared weights, according to the STDP learning rule or the imbalanced R-STDP learning rule.…”
Section: Convolutional Layer
Mentioning confidence: 99%
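The winner-take-all competition with a single shared weight update, as described in the quote above, can be sketched as follows. This is an illustrative simplification, not the cited paper's implementation: the function name, the a_plus/a_minus rates, and the multiplicative w(1-w) STDP form are assumptions.

```python
import numpy as np

def wta_stdp_update(spike_times, weights, pre_spiked,
                    a_plus=0.004, a_minus=0.003):
    """Pick the earliest-firing neuron as the WTA winner and apply a
    simplified STDP update to the shared input weights.
    spike_times : firing time per neuron (np.inf if silent; each
                  neuron fires at most once).
    weights     : shared input synaptic weights in [0, 1].
    pre_spiked  : boolean mask, True where the presynaptic input
                  spiked before the winner's spike.
    Illustrative sketch; rates and rule form are assumptions."""
    winner = int(np.argmin(spike_times))
    dw = np.where(pre_spiked,
                  a_plus * weights * (1.0 - weights),     # potentiate causal inputs
                  -a_minus * weights * (1.0 - weights))   # depress the rest
    new_weights = np.clip(weights + dw, 0.0, 1.0)
    return winner, new_weights
```

Because the weights are shared across the feature map, the single winner's update immediately affects every neuron in that map, which is what enforces competition between maps rather than between individual units.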