2007
DOI: 10.1162/neco.2007.19.5.1313
Efficient Computation Based on Stochastic Spikes

Abstract: The speed and reliability of mammalian perception indicate that cortical computations can rely on very few action potentials per involved neuron. Together with the stochasticity of single-spike events in cortex, this appears to imply that large populations of redundant neurons are needed for rapid computations with action potentials. Here we demonstrate that very fast and precise computations can be realized also in small networks of stochastically spiking neurons. We present a generative network model for whi…

Cited by 10 publications (46 citation statements); citing publications span 2010–2023. References 35 publications.
“…It was previously shown how to iteratively estimate priors in a spiking neural model [30], but our work has the goal of combining learning with inference. Our architecture is somewhat tricky.…”
Section: B. Neural Implementation of Inference
confidence: 99%
“…low power consumption, fast inference, event-driven information processing, and massive parallelization) ([10]). [11] presented a type of shallow neuronal network that is based on non-negative generative models ([12, 13]). Compared to other models with spiking neurons, this Spike-By-Spike (SbS) network model requires relatively little additional computation to use spikes as a means of transmitting information to other neurons.…”
Section: Introduction
confidence: 99%
“…For a network consisting of one input layer and a single hidden layer, an iterative algorithm can be derived ([11]) that maximizes the likelihood of the observed spikes, generated by the input layer, under an internal representation of the input based on hidden latent variables. The input layer is defined by an input probability distribution p_µ(s) for input pattern µ to generate a (next) spike at input neuron s. For a given pattern µ, in every time step t a spike is drawn at input neuron s_{t,µ} from p_µ(s).…”
Section: Introduction
confidence: 99%
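The spike-by-spike inference loop described in the quote above can be sketched as follows. This is a minimal illustration, not the paper's exact equations: the weight matrix, update rate, and the specific multiplicative form of the latent update are assumptions chosen so that the latent variables stay a valid probability distribution over hidden causes.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_hidden = 8, 4  # hypothetical network sizes

# Input distribution p_mu(s): probability that pattern mu emits the
# next spike at input neuron s.
p = rng.random(n_inputs)
p /= p.sum()

# Generative weights W[i, s] (illustrative values): probability that
# hidden cause i generates a spike at input neuron s.
W = rng.random((n_hidden, n_inputs))
W /= W.sum(axis=1, keepdims=True)

# Latent variables h_i, initialized uniformly and kept normalized.
h = np.full(n_hidden, 1.0 / n_hidden)

eps = 0.1  # update rate (hypothetical value)

for t in range(200):
    # Draw the next spike s_t from p_mu(s).
    s_t = rng.choice(n_inputs, p=p)
    # Multiplicative spike-by-spike update: hidden causes that explain
    # the observed spike well gain probability mass.
    h = h * (1.0 + eps * (W[:, s_t] / (h @ W[:, s_t]) - 1.0))
    h /= h.sum()  # numerical safety; the update preserves the sum
```

After enough spikes, `h` concentrates on the hidden causes whose generative columns best match the empirical input distribution.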
“…Here we investigated the feasibility of CS with rate-coding neurons. We were also interested in the speed-precision tradeoff of reconstructions using spike-based algorithms similar to the one we introduced previously [3].…”
confidence: 99%
“…Therefore the construction of sparse representations from spikes can be considered a bias favoring speed in contrast to faithfulness. In [3] we showed that learning generating matrices is possible using only spike activity. Taken together, our results underline the potential relevance of CS for understanding connectivity structures, sparseness and activity dynamics in the brain.…”
confidence: 99%
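The compressed-sensing (CS) reconstruction these citing works discuss can be illustrated with a generic sparse-recovery sketch. The solver below is plain ISTA (iterative soft-thresholding) on a random Gaussian measurement matrix; the matrix sizes, sparsity level, and solver choice are assumptions for illustration and are not the spike-based algorithm of [3].

```python
import numpy as np

rng = np.random.default_rng(1)

n, m, k = 64, 24, 4  # signal dim, measurements, sparsity (hypothetical)

# A k-sparse signal and a random measurement matrix.
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.random(k) + 0.5
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true  # underdetermined measurements (m < n)

# ISTA for min_x ||y - Ax||^2 + lam * ||x||_1.
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / sigma_max(A)^2
x = np.zeros(n)
for _ in range(500):
    x = x + step * A.T @ (y - A @ x)            # gradient step
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # shrink
```

Despite m < n, the L1 penalty recovers the sparse support, which is the sense in which few spike-count measurements can suffice for reconstruction.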