2019 IEEE 10th Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON)
DOI: 10.1109/uemcon47517.2019.8992983
Multi-Objective Optimization for Size and Resilience of Spiking Neural Networks

Abstract: Inspired by the connectivity mechanisms in the brain, neuromorphic computing architectures model Spiking Neural Networks (SNNs) in silicon. As such, neuromorphic architectures are designed and developed with the goal of having small, low power chips that can perform control and machine learning tasks. However, the power consumption of the developed hardware can greatly depend on the size of the network that is being evaluated on the chip. Furthermore, the accuracy of a trained SNN that is evaluated on chip can…

Cited by 9 publications (7 citation statements)
References 28 publications
“…GA requires no a priori knowledge about what it is trying to optimise, as domain-specific knowledge is contained in the fitness function and the genetic operators defined for the problem. In previous applications of GA to SNNs, [11] proposed a discrete objective function to optimise the size and resilience of SNNs, while [18] used GA to train spiking neural networks to compete for limited resources in a simulated environment.…”
Section: Introduction
confidence: 99%
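The GA-based approach quoted above can be sketched as follows. This is a hypothetical toy, not the method of [11] or [18]: the genome is a binary mask over neurons, and the fitness function (entirely illustrative) trades an accuracy proxy against network size, so all domain knowledge lives in `fitness` and the genetic operators, as the statement describes.

```python
# Toy genetic algorithm pruning an SNN-like network: genome = binary
# neuron mask; fitness = accuracy proxy minus a size penalty.
# IMPORTANCE and the 0.3 penalty weight are illustrative assumptions.
import random

random.seed(0)

N_NEURONS = 20
# Toy per-neuron "importance"; a real setup would evaluate the SNN on a task.
IMPORTANCE = [random.random() for _ in range(N_NEURONS)]

def fitness(mask):
    accuracy = sum(w for w, keep in zip(IMPORTANCE, mask) if keep)
    size_penalty = 0.3 * sum(mask)  # smaller networks score higher
    return accuracy - size_penalty

def mutate(mask, rate=0.1):
    # Flip each bit independently with probability `rate`.
    return [bit ^ (random.random() < rate) for bit in mask]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=30, generations=50):
    pop = [[random.randint(0, 1) for _ in range(N_NEURONS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Swapping in a different fitness function (e.g. measured on-chip accuracy versus chip power) changes the objective without touching the search loop, which is the property the quoted statement highlights.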
See 1 more Smart Citation
“…GA requires no a priori knowledge about what it is trying to optimise as domain specific knowledge is contained in the fitness function and the genetic operators defined for the problem. In previous applications of GA to SNNs, [11] proposed a discrete objective function to optimise the size and resilience of SNNs, while [18] used GA to train spiking neural networks to compete for limited resources in simulated environment.…”
Section: Introductionmentioning
confidence: 99%
“…Despite their advantages, the modelling of SNNs poses several challenges. Given the larger number of parameters compared to classic neural networks, a major challenge is the time-consuming process of model parameter searching [11]. Commonly, optimisation methods are required: processes that search for optimal solution(s) with respect to the model parameter(s) and some specified goal(s), via objective mathematical function(s) [12].…”
Section: Introduction
confidence: 99%
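As a minimal sketch of such a parameter search, here is a random search over a few illustrative SNN parameters. The parameter names (`threshold`, `leak`, `weight_scale`) and the stand-in objective are assumptions for demonstration; a real objective would train and evaluate the network.

```python
# Random search over hypothetical SNN model parameters.
# `evaluate` is a stand-in loss whose optimum is at
# threshold=1.0, leak=0.1, weight_scale=0.5 (chosen arbitrarily).
import random

random.seed(1)

def evaluate(params):
    return ((params["threshold"] - 1.0) ** 2
            + (params["leak"] - 0.1) ** 2
            + (params["weight_scale"] - 0.5) ** 2)

def random_search(n_trials=200):
    best_params, best_loss = None, float("inf")
    for _ in range(n_trials):
        params = {
            "threshold": random.uniform(0.5, 2.0),
            "leak": random.uniform(0.0, 0.5),
            "weight_scale": random.uniform(0.1, 1.0),
        }
        loss = evaluate(params)
        if loss < best_loss:
            best_params, best_loss = params, loss
    return best_params, best_loss

params, loss = random_search()
```

Even this naive strategy illustrates why parameter searching is time-consuming: each trial implies a full train-and-evaluate cycle, which is exactly the cost that motivates smarter optimisers such as the GA discussed above.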
“…Pruning has been extensively studied in ANNs because of its great success, whereas only a limited amount of work has been done in SNNs [15][16][17][18]. In Reference [15], the authors propose pruning neurons by comparing their class-wise dominance with a certain threshold in a supervised way.…”
Section: Introduction
confidence: 99%
“…In Reference [15], the authors propose pruning neurons by comparing their class-wise dominance with a certain threshold in a supervised way. In Reference [17], a post-training neuron pruning method was used to reduce the network size by comparing the spiking frequency of each neuron with the average output spike frequency. In Reference [18], a supervised pruning strategy was demonstrated by evaluating the similarity between neurons and pruning similar ones.…”
Section: Introduction
confidence: 99%
“…Many of the results providing consistent reconstruction generally assume that the system dynamics are strictly causal Granger (1969); Yue et al (2017); Gonçalves and Warnick (2008); Etesami and Kiyavash (2014). However, in practice, the need for models with not necessarily strictly causal dynamics arises in many areas, such as biology Schiatti et al (2015); Faes et al (2015), finance Materassi and Innocenti (2009), or neuroscience Seth et al (2015) and brain-inspired neural network models Dimovska et al (2019). For this reason, there are methods that, at least in the linear case, try to deal with direct feedthroughs, too.…”
Section: Introduction
confidence: 99%