The 2011 International Joint Conference on Neural Networks
DOI: 10.1109/ijcnn.2011.6033443
A reversibility analysis of encoding methods for spiking neural networks

Abstract: There is much excitement surrounding the idea of using spiking neural networks (SNNs) as the next generation of function-approximating neural networks. However, with the unique mechanism of communication (neural spikes) between neurons comes the challenge of transferring real-world data into the network to process. Many different encoding methods have been developed for SNNs, most temporal and some spatial. This paper analyzes three of them (Poisson rate encoding, Gaussian receptor fields, and a dual-neuron n-…
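The abstract names Poisson rate encoding as one of the three analyzed methods. As background, here is a minimal sketch of the idea — a value is mapped to the probability of a spike at each time step, so the mean firing rate encodes the value. The function name and parameters are illustrative, not taken from the paper:

```python
import numpy as np

def poisson_encode(value, t_steps, rng=None):
    """Encode a real value in [0, 1] as a stochastic spike train.

    At each time step the neuron fires with probability equal to the
    input value, so the mean firing rate over the window approximates
    the encoded value.
    """
    rng = rng or np.random.default_rng(0)
    return (rng.random(t_steps) < value).astype(int)

spikes = poisson_encode(0.7, t_steps=1000)
rate = spikes.mean()  # close to the encoded value 0.7
```

Longer windows reduce the variance of the rate estimate, which is the usual trade-off of rate codes: accuracy against encoding latency.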

Cited by 6 publications (3 citation statements)
References 10 publications
“…Basically, encoding a variable with this scheme requires m neurons (with Gaussian functions) covering the whole range of the variable, a coefficient γ that sets the width of the Gaussian functions, and the encoding simulation time τ. For detailed information about GRFs and other encoding schemes, the authors suggest consulting [12].…”
Section: Spiking Neural Network
confidence: 99%
“…A detailed definition of the construction and use of the GRFs is given in [13]. Each input datum is fed to all the conversion neurons covering the whole data range.…”
Section: Gaussian Receptive Fields
confidence: 99%
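The GRF scheme these excerpts describe — m neurons with Gaussian tuning curves tiling the variable's range, a width coefficient γ, and an encoding window τ — can be sketched as follows. The center placement, width formula, and all names here are assumptions for illustration, not details taken from [12] or [13]:

```python
import numpy as np

def grf_encode(x, x_min, x_max, m=8, gamma=1.5, tau=10.0):
    """Gaussian receptive field (population) encoding sketch.

    m neurons with Gaussian tuning curves tile [x_min, x_max]; gamma
    scales the width of each Gaussian; each neuron's response is
    mapped to a spike time within the encoding window tau, with a
    stronger response yielding an earlier spike.
    """
    # Place m centers evenly across the variable's range.
    centers = x_min + (np.arange(m) + 0.5) * (x_max - x_min) / m
    width = (x_max - x_min) / (gamma * m)
    responses = np.exp(-0.5 * ((x - centers) / width) ** 2)  # in (0, 1]
    spike_times = tau * (1.0 - responses)  # response 1 -> fire at t = 0
    return spike_times

times = grf_encode(0.3, 0.0, 1.0)  # the neuron centered nearest 0.3 fires first
```

The input value is thus fed to all m conversion neurons at once, matching the statement above that each input datum drives every neuron covering the data range.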
“…The most widely used neuron circuits include integrate-and-fire (IF) neurons, a simplified model of a biological neuron that integrates current in a membrane capacitor and generates an action potential when the membrane voltage exceeds the threshold [14][15][16][17][18][19]. IF neurons receive and transmit signals in various forms, such as left-justified encoding or Poisson encoding [20][21][22]. The behavior of IF neurons has been proven to be equivalent to the rectified linear unit (ReLU) activation function of non-SNNs, making offline learning possible by transferring weights calculated on an external computer.…”
Section: Introduction
confidence: 99%
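The IF-neuron behavior described in this excerpt — integrate input, fire on threshold crossing, with the firing rate behaving like a ReLU of the input — can be illustrated with a minimal sketch. The reset-by-subtraction choice and all names here are assumptions for illustration, not details from the cited works:

```python
def if_rate(input_current, threshold=1.0, t_steps=1000, dt=1.0):
    """Discrete-time integrate-and-fire neuron sketch.

    Integrates the input each step; when the membrane potential crosses
    the threshold it emits a spike and resets by subtracting the
    threshold. Returns the firing rate over the simulation window.
    """
    v, spikes = 0.0, 0
    for _ in range(t_steps):
        v += input_current * dt
        if v >= threshold:
            spikes += 1
            v -= threshold  # reset by subtraction
    return spikes / t_steps

# The rate tracks max(0, input): the ReLU correspondence noted above.
print(if_rate(0.25))   # 0.25
print(if_rate(-0.3))   # 0.0 (negative input never reaches threshold)
```

With a constant positive input the neuron fires at a rate proportional to that input, while any non-positive input produces no spikes, which is exactly the rectification that lets ReLU-trained weights be transferred to an SNN.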