2023
DOI: 10.1016/j.neunet.2023.07.008

Sparser spiking activity can be better: Feature Refine-and-Mask spiking neural network for event-based visual recognition

Cited by 10 publications (2 citation statements)
References 48 publications
“…Inspired by this, the basic idea of the proposed dynamic framework is to modify the spiking response by optimizing the membrane potential, because a spiking neuron decides whether to fire based on whether its membrane potential exceeds a threshold. In our previous work 25,27–29, we implemented attention through additional plug-and-play modules, covering the temporal, channel, and spatial dimensions either independently or coupled, to learn “when”, “what”, and “where” to focus on. These attention modules first capture global information along each dimension and then use it to model the relative importance of the different input moments, channels, or locations.…”
Section: Results (mentioning, confidence: 99%)
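
The sketch below illustrates the general idea in the quoted passage: a channel-attention gate rescales the synaptic current before a leaky integrate-and-fire (LIF) update, so the attention effectively shapes the membrane potential that is compared against the firing threshold. It is a minimal illustration assuming a PyTorch setting; the names (ChannelAttention, lif_step, tau, v_th) are placeholders, not the modules from the cited works.

```python
# Minimal sketch (assumed PyTorch setting, not the cited papers' code):
# a channel-attention gate rescales the synaptic current before a leaky
# integrate-and-fire (LIF) update, so attention shapes the membrane
# potential that is compared against the firing threshold.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style gate: global pooling, then a small MLP."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, H, W) -> per-channel weights in (0, 1)
        w = self.fc(x.mean(dim=(2, 3)))
        return w.unsqueeze(-1).unsqueeze(-1)

def lif_step(v, current, attn, tau=2.0, v_th=1.0):
    """One LIF update with attention-modulated input current (hard reset)."""
    v = v + (attn(current) * current - v) / tau   # leaky integration
    spikes = (v >= v_th).float()                  # fire where potential crosses threshold
    v = v * (1.0 - spikes)                        # reset neurons that fired
    return spikes, v
```

Training such a network would additionally need a surrogate gradient for the threshold step, which is omitted here.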
“…To facilitate the deployment of attention SNNs on neuromorphic chips, we summarize existing methods 25,27–29 into a general attention-based dynamic framework. Figure 3b, c show the LIF-SNN layer and the dynamic SNN architecture, respectively.…”
Section: Results (mentioning, confidence: 99%)
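
As a rough guess at how such a framework might be organized (not the architecture from Figure 3 of the citing paper), the fragment below wires the attention-gated LIF step from the previous sketch into a convolutional layer unrolled over T time steps; AttentionLIFLayer and its arguments are illustrative names.

```python
# Rough, assumed layout of an attention-based dynamic SNN layer; reuses
# ChannelAttention and lif_step from the previous sketch. Not the
# architecture shown in Figure 3 of the citing paper.
import torch
import torch.nn as nn

class AttentionLIFLayer(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, tau: float = 2.0, v_th: float = 1.0):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.attn = ChannelAttention(out_ch)
        self.tau, self.v_th = tau, v_th

    def forward(self, x_seq: torch.Tensor) -> torch.Tensor:
        # x_seq: (T, batch, in_ch, H, W) event/spike frames
        outputs, v = [], 0.0
        for x in x_seq:                              # unroll over time steps
            spk, v = lif_step(v, self.conv(x), self.attn, self.tau, self.v_th)
            outputs.append(spk)
        return torch.stack(outputs)                  # (T, batch, out_ch, H, W)
```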