2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52688.2022.00358

Event-based Video Reconstruction via Potential-assisted Spiking Neural Network

Cited by 75 publications (33 citation statements); References 51 publications.
“…Frame-based methods commonly integrate events into dense representations and adapt these representations to CNNs for further processing. Benefiting from the prior knowledge contained in pretrained CNNs, these approaches achieve the highest performance on multiple vision tasks, e.g., event-based recognition [2]-[6], video reconstruction [7], [24], [25], and optical flow estimation [26]-[28]. Nevertheless, they usually sacrifice the sparsity of event data, leading to redundant computation and high model complexity [13], [29], thereby limiting event-based applications on mobile devices, edge computing, etc.…”
Section: Related Work
confidence: 99%
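The dense-representation step described in the excerpt above can be sketched in a few lines. The voxel-grid layout, the bin count, and the function name below are illustrative assumptions, not the exact encoding of any cited method:

```python
import numpy as np

def events_to_frame(events, height, width, num_bins=5):
    """Accumulate a stream of (t, x, y, polarity) events into a dense
    voxel-grid tensor that a CNN can consume.

    `events` is an (N, 4) float array; polarity is +1 or -1.
    A generic sketch of the events-to-dense-representation step.
    """
    t, x, y, p = events[:, 0], events[:, 1], events[:, 2], events[:, 3]
    frame = np.zeros((num_bins, height, width), dtype=np.float32)
    # Normalize timestamps into [0, num_bins) and assign each event a bin.
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9) * (num_bins - 1e-6)
    bins = t_norm.astype(int)
    # Unbuffered accumulation so repeated (bin, y, x) indices all count.
    np.add.at(frame, (bins, y.astype(int), x.astype(int)), p)
    return frame
```

The resulting `(num_bins, H, W)` tensor is dense, which is exactly the trade-off the excerpt criticizes: the sparsity of the raw event stream is lost before the CNN ever sees it.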
“…The capacity of neural networks is surely crucial for their success, but earlier directly-trained SNNs mainly suffer from severe accuracy degradation and are limited to shallow structures and simple tasks. Inspired by the representation power of deep ANNs, more attention has been paid to the design of SNN-oriented network structures, and emerging works such as threshold-dependent batch normalization [62], [243], spiking residual learning [63], [64], attention-based SNNs [65], [244], and spiking transformers [66], [245] are gradually closing the performance gap with ANNs and demonstrating the great potential of large-scale SNNs for more complicated tasks.…”
Section: B. Learning Algorithms
confidence: 99%
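As a rough illustration of one of the structural ideas listed above, a spike-element-wise residual connection (in the spirit of spiking residual learning [63], [64]) might look like the following NumPy sketch; the Heaviside firing rule, the matrix product standing in for a convolution, and all names are assumptions:

```python
import numpy as np

def heaviside_spike(v, threshold=1.0):
    """Fire a binary spike wherever the membrane potential reaches
    the threshold (forward pass only; no surrogate gradient here)."""
    return (v >= threshold).astype(np.float32)

def sew_residual_block(x_spikes, weight, threshold=1.0):
    """Sketch of a spike-element-wise residual connection: the inner
    branch's spikes are combined with the identity spikes by
    elementwise addition, so information still flows even when the
    inner branch stays silent. `weight` stands in for a conv layer."""
    branch = heaviside_spike(x_spikes @ weight, threshold)
    return branch + x_spikes  # identity shortcut on spike tensors
```

The elementwise addition on spike tensors (rather than on pre-activation potentials) is what lets such blocks stack deeply without the degradation the excerpt attributes to earlier directly-trained SNNs.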
“…Drawing on research in neuroscience is one of the main lines, such as modeling of spiking neurons [69], [262], learning inspired by synaptic plasticity [263], [264], etc. Another important path is to draw nutrients from the development of traditional AI, including the design of SNNs with increasing network…

Work                 Training         Architecture         Dataset      Accuracy
Zhou(2022) [66]      Direct Training  Spiking Transformer  ImageNet     74.81%
Wu(2019) [67]        Direct Training  CNN                  DVS-CIFAR10  60.50%
Zheng(2021) [62]     Direct Training  ResNet               DVS-CIFAR10  67.80%
Yao(2021) [68]       Direct Training  CNN                  DVS-CIFAR10  72.00%
Fang(2021) [69]      Direct Training  ResNet               DVS-CIFAR10  74.80%
Meng(2022) [70]      Direct Training  ResNet               DVS-CIFAR10  78.50%
Zhou(2022) [66]      Direct Training  Spiking Transformer  DVS-CIFAR10  80.90%
He(2020) [71]        Direct Training  CNN                  DVS-Gesture  93.40%
Shrestha(2018) [72]  Direct Training  CNN                  DVS-Gesture  93.64%
Zheng(2021) [62]     Direct Training  ResNet               DVS-Gesture  96.87%
Fang(2021) [63]      Direct Training  ResNet               DVS-Gesture  97.92%
Yao(2022) [65]       Direct Training  ResNet               DVS-Gesture  98.23%
Zhou(2022) [66]      Direct Training  Spiking Transformer  DVS-Gesture  98.30%…”
Section: Key Considerations
confidence: 99%
“…where equation (1) is similar to the function of the recurrent layer. The membrane time constant controls the balance between remembering and forgetting (Zhu, Wang, Chang, Li, Huang and Tian (2022)). Thus, it can be considered a simple version of the recurrent layer.…”
Section: Bio-inspired SRNN Model
confidence: 99%
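The remember/forget role of the membrane time constant can be made concrete with a minimal leaky integrate-and-fire update. The value of `tau`, the hard reset, and the function name are illustrative assumptions rather than the exact model of the quoted work:

```python
import numpy as np

def lif_step(v, x, tau=2.0, threshold=1.0, v_reset=0.0):
    """One step of a leaky integrate-and-fire neuron.

    The leak factor (1 - 1/tau) decides how much of the previous
    potential is remembered versus forgotten, which is why the
    update resembles a simple recurrent layer's state transition.
    """
    v = v + (x - v) / tau                  # leaky integration of input x
    spikes = (v >= threshold).astype(np.float32)
    v = np.where(spikes > 0, v_reset, v)   # hard reset where a spike fired
    return spikes, v
```

Iterated over timesteps, `v` carries state forward exactly like a recurrent hidden state, with `tau` playing the role of a fixed forget gate.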