“…In SNNs, synaptic strengths are described as scalar weights that can be dynamically modified according to a particular learning rule. The learning rules of SNNs are actively investigated and generally fall into three categories: conversion-based methods that map trained ANNs onto SNNs (Diehl et al., 2016; Hunsberger and Eliasmith, 2016; Rueckauer et al., 2016, 2017; Sengupta et al., 2019; Han et al., 2020); supervised learning with spikes, which directly trains SNNs using variants of error backpropagation (Lee et al., 2016; Shrestha and Orchard, 2018; Wu et al., 2018, 2019; Neftci et al., 2019; Yin et al., 2020; Fang et al., 2021); and local learning rules at synapses, such as schemes exploiting spike-timing-dependent plasticity (STDP) (Song et al., 2000; Nessler et al., 2009; Diehl and Cook, 2015; Tavanaei et al., 2016; Masquelier and Kheradpisheh, 2018). Beyond these directions, many new algorithms have emerged, such as a biologically plausible implementation of backpropagation (BP) in pyramidal neurons based on the bursting mechanism (Payeur et al., 2021); biologically plausible online learning based on rewards and eligibility traces (Bellec et al., 2020); and target-based learning in recurrent spiking networks (Ingrosso and Abbott, 2019; Muratore et al., 2021), which provides an alternative to error-based approaches.…”
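To make the third category concrete, the following is a minimal sketch of trace-based pairwise STDP for a single synapse, in the spirit of Song et al. (2000): a presynaptic spike shortly before a postsynaptic spike potentiates the weight, while the reverse ordering depresses it. The function name stdp_update and all parameter values are illustrative assumptions for this sketch, not taken from the cited works.

import numpy as np

# Illustrative parameters (assumed values, not from the cited papers).
A_PLUS = 0.01      # potentiation amplitude (pre before post)
A_MINUS = 0.012    # depression amplitude (post before pre)
TAU_PLUS = 20.0    # ms, decay constant of the presynaptic trace
TAU_MINUS = 20.0   # ms, decay constant of the postsynaptic trace
DT = 1.0           # ms, simulation time step

def stdp_update(w, pre_spikes, post_spikes, w_min=0.0, w_max=1.0):
    """Apply pairwise STDP to a single scalar weight `w`.

    pre_spikes, post_spikes: binary arrays of length T with per-step
    spike indicators. Returns the weight trajectory over T steps.
    """
    x_pre, x_post = 0.0, 0.0        # exponentially decaying spike traces
    trajectory = np.empty(len(pre_spikes))
    for t, (s_pre, s_post) in enumerate(zip(pre_spikes, post_spikes)):
        # Decay each trace, then add the current spike if one occurred.
        x_pre += -x_pre * DT / TAU_PLUS + s_pre
        x_post += -x_post * DT / TAU_MINUS + s_post
        # Post spike following a recent pre spike -> potentiation (LTP);
        # pre spike following a recent post spike -> depression (LTD).
        w += A_PLUS * x_pre * s_post - A_MINUS * x_post * s_pre
        w = np.clip(w, w_min, w_max)
        trajectory[t] = w
    return trajectory

# Usage: a pre spike at t = 10 ms followed by a post spike at t = 15 ms
# should strengthen the synapse.
T = 50
pre = np.zeros(T);  pre[10] = 1
post = np.zeros(T); post[15] = 1
traj = stdp_update(0.5, pre, post)
print(f"weight: 0.500 -> {traj[-1]:.3f}")   # slightly increased

The exponential traces avoid storing full spike histories: each trace summarizes how recently its neuron fired, so the pairwise rule costs O(1) work per synapse per time step, which is what makes STDP attractive as a local learning rule.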