2016
DOI: 10.3389/fnins.2016.00241
Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines

Abstract: Recent studies have shown that synaptic unreliability is a robust and sufficient mechanism for inducing the stochasticity observed in cortex. Here, we introduce Synaptic Sampling Machines (S2Ms), a class of neural network models that uses synaptic stochasticity as a means to Monte Carlo sampling and unsupervised learning. Similar to the original formulation of Boltzmann machines, these models can be viewed as a stochastic counterpart of Hopfield networks, but where stochasticity is induced by a random mask over…
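The sketch below is a minimal, hedged illustration of the idea described in the abstract: a Hopfield-style binary network in which stochasticity comes from unreliable synapses rather than from a noisy neuron model. Each incoming synapse independently fails to transmit with some probability ("blank-out" noise), so even a deterministic threshold unit behaves stochastically. The network size, failure probability, and random weights are assumptions for illustration, not the parameters used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64             # number of binary units (hypothetical size)
p_blank = 0.5      # per-synapse transmission-failure probability (assumption)

# Symmetric Hopfield-style weights with zero diagonal (illustrative random values)
W = rng.normal(0, 0.1, (N, N))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)
b = np.zeros(N)

s = rng.integers(0, 2, N).astype(float)   # binary state vector

def update_unit(i, s):
    """One asynchronous update of unit i with unreliable synapses.

    Each incoming synapse independently fails (is masked to zero) with
    probability p_blank, so the net input -- and hence the unit's state --
    is a random variable even though the threshold rule is deterministic.
    """
    mask = rng.random(N) >= p_blank        # 1 = synapse transmits, 0 = failure
    net = np.dot(W[i] * mask, s) + b[i]
    return 1.0 if net > 0 else 0.0

# Asynchronous sampling sweep: Gibbs-like, but driven by synaptic noise
for sweep in range(100):
    for i in rng.permutation(N):
        s[i] = update_unit(i, s)
```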

Cited by 119 publications (106 citation statements)
References: 73 publications
“…When computations are local (as in neuromorphic hardware), there is no such overhead and weight updates can be performed online without loss in speed. In fact, our empirical observations confirm that spiking networks often require fewer iterations over the dataset to reach peak classification performance compared with artificial neural networks trained with batch gradient descent (Neftci et al., 2016; Neftci et al., 2017). This improved learning speed is visible in the middle panel of Figure 1 and can translate directly into power reductions on dedicated hardware.…”
Section: Embedded Learning Rules for Resource-Constrained Learning (supporting)
confidence: 72%
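The contrast drawn in this citation statement (online, local updates versus batch gradient descent) can be illustrated with a toy example. The sketch below uses a generic delta rule as a stand-in for a local learning rule; it is not the rule from the cited works, and the data, layer sizes, and learning rate are assumptions. The point is only that the online variant changes weights after every sample using quantities available at the synapse, whereas the batch variant must accumulate gradients over the whole dataset before a single update.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 200 samples, 20 input features, 5 output classes (hypothetical)
X = rng.random((200, 20))
labels = rng.integers(0, 5, 200)
Y = np.eye(5)[labels]                      # one-hot targets

W_batch = np.zeros((20, 5))
W_online = np.zeros((20, 5))
lr = 0.05

# Batch gradient descent: the full dataset is visited before a single update.
for epoch in range(10):
    pred = X @ W_batch
    grad = X.T @ (pred - Y) / len(X)       # requires global accumulation
    W_batch -= lr * grad

# Online, local update: each sample triggers an immediate weight change that
# needs only the presynaptic input and the postsynaptic error, i.e. quantities
# available at the synapse itself.
for epoch in range(10):
    for x, y in zip(X, Y):
        err = x @ W_online - y
        W_online -= lr * np.outer(x, err)  # local outer-product update
```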
“…We find that both networks require roughly the same number of operations to reach the same accuracy during learning. This SynOp-MAC parity was also reported for synaptic sampling machines (Neftci E. O. et al., 2016). There, it was argued that SynOp-MAC parity is very promising for hardware implementations, because a SynOp in dedicated hardware potentially consumes much less power than a MAC in a general-purpose digital processor.…”
Section: Efficiency in Learning: Achieving SynOp-MAC Parity (mentioning)
confidence: 53%
“…All learning rates were kept fixed during the simulation. Other I&F neuron-related parameters were carried over from previous work (Neftci E. O. et al., 2016) and were not specifically tuned for eRBP. To prevent the network from learning (spurious) transitions between digits, the synaptic weights were not updated during the first 50 ms window of each digit presentation.…”
Section: Experimental Setup and Software Simulations (mentioning)
confidence: 99%
“…In fact, device runtime stochasticity may be considered a favourable property that mimics real biological synapses and can act as a regularizer during training [26]. In addition, practical network operations do not require years of data retention, as in the case of storage systems, and the requirement of device endurance may also be relaxed, since weight updates are often infrequent [27].…”
Section: Nature Electronics (mentioning)
confidence: 99%