Brain-inspired spiking neural networks (SNN) are claimed to offer advantages for visual classification tasks in terms of energy efficiency and inherent robustness. In this work, we explore how neural coding schemes and the intrinsic structural parameters of Leaky Integrate-and-Fire (LIF) neurons shape inter-layer sparsity, which can serve as a candidate metric for performance evaluation. To this end, we perform a comparative study of four critical neural coding schemes: rate coding (Poisson coding), latency coding, phase coding, and direct coding, together with six intrinsic LIF parameter settings, for a total of 24 combined parameter schemes. Specifically, the models were trained with a supervised surrogate-gradient algorithm, and two adversarial attacks, the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD), were applied on the CIFAR10 dataset. We identify the sources of inter-layer sparsity in SNN and quantitatively analyze the differences in sparsity caused by coding schemes, neuron leakage factors, and thresholds. Multiple aspects of network performance are considered, including inference accuracy, adversarial robustness, and energy efficiency. Our results show that latency coding is the optimal choice for achieving the highest adversarial robustness and energy efficiency under low-intensity attacks, while rate coding offers the best adversarial robustness under medium- and high-intensity attacks. The maximum deviations of robustness and efficiency between coding schemes are 9.35% in VGG5 and 13.59% in VGG9. Increasing the sparsity of spike activity by raising the threshold can yield a short-lived adversarial-robustness sweet spot, whereas excessive sparsity caused by changes in threshold and leakage instead reduces adversarial robustness. The study reveals the advantages, disadvantages, and design space of SNN along multiple dimensions, allowing researchers to frame their neuromorphic systems in terms of coding methods, intrinsic neuron structure, and model learning capability.
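For concreteness, the minimal Python sketch below illustrates two of the knobs studied here: Poisson rate coding of an input intensity into a spike train, and the leak factor and firing threshold of a LIF neuron that govern inter-layer sparsity. The function names, the synaptic weight, and the parameter values (`beta`, `v_th`) are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (illustrative, not the paper's code): Poisson rate coding
# feeding a single leaky integrate-and-fire (LIF) neuron. The leak factor
# `beta` and threshold `v_th` are the intrinsic LIF parameters varied in
# this study; the values here are assumptions for demonstration.
import numpy as np

rng = np.random.default_rng(0)

def rate_encode(intensity, timesteps):
    """Poisson (rate) coding: each timestep fires with probability
    equal to the normalized input intensity in [0, 1]."""
    return (rng.random(timesteps) < intensity).astype(float)

def lif_neuron(spikes, weight=0.5, beta=0.9, v_th=1.0):
    """LIF dynamics: v[t] = beta * v[t-1] + w * s[t]; emit a spike and
    reset the membrane potential when v crosses the threshold."""
    v, out = 0.0, []
    for s in spikes:
        v = beta * v + weight * s
        fired = v >= v_th
        out.append(float(fired))
        if fired:
            v = 0.0  # hard reset after firing
    return np.array(out)

in_spikes = rate_encode(intensity=0.8, timesteps=25)
out_spikes = lif_neuron(in_spikes)
print(f"input spike rate:  {in_spikes.mean():.2f}")   # ~0.8 for intensity 0.8
print(f"output spike rate: {out_spikes.mean():.2f}")  # sparser: leak and threshold gate spikes
```

In this toy setting, raising `v_th` or lowering `beta` visibly thins the output spike train, which is the sparsity effect the study quantifies across coding schemes and LIF parameter settings.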
INDEX TERMS: Spiking neural network, accuracy, energy efficiency, adversarial robustness, sparsity

I. INTRODUCTION
Spiking neural networks (SNN), which exhibit spatiotemporal spike sparsity and biologically plausible properties, are increasingly used to investigate neural computing circuits that are more energy efficient than artificial neural networks (ANN) [1], [2]. Numerous studies have begun to use biologically plausible learning methods to implement neuromorphic circuits [3]-[5]. SNN have been found to offer certain robustness advantages in resisting sample noise and adversarial attacks, owing to their sparse inherent structure and discrete encoding of the input [6]-[9]. The main challenge in optimizing SNN is the lack of reliable metrics for their inference accuracy. Current performance-improving methods can be broadly classified into three types: ANN-converted SNN [10], [11], surrogate-gradient training methods [12], and unsupervised ...