2019 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2019.00573

End-to-End Learning of Representations for Asynchronous Event-Based Data

Abstract: Event cameras are vision sensors that record asynchronous streams of per-pixel brightness changes, referred to as "events". They have appealing advantages over frame-based cameras for computer vision, including high temporal resolution, high dynamic range, and no motion blur. Due to the sparse, non-uniform spatiotemporal layout of the event signal, pattern recognition algorithms typically aggregate events into a grid-based representation and subsequently process it by a standard vision pipeline, e.g., Convolutional Neural Networks (CNNs).
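The aggregation step the abstract refers to can be made concrete with a short sketch. Below is a minimal NumPy illustration of one common hand-crafted variant, a time-binned voxel grid with bilinear temporal weighting; the function name and the (t, x, y, p) event layout are assumptions for illustration, not the paper's actual API:

    import numpy as np

    def events_to_voxel_grid(events, num_bins, height, width):
        # events: (N, 4) float array of (t, x, y, p), sorted by timestamp t,
        # with integer pixel coordinates x, y and polarity p in {-1, +1}.
        grid = np.zeros((num_bins, height, width), dtype=np.float32)
        t = events[:, 0]
        x = events[:, 1].astype(int)
        y = events[:, 2].astype(int)
        p = events[:, 3]

        # Normalize timestamps to the range [0, num_bins - 1].
        t_norm = (num_bins - 1) * (t - t[0]) / max(t[-1] - t[0], 1e-9)
        t0 = np.floor(t_norm).astype(int)

        # Each event contributes to its two nearest temporal bins,
        # weighted bilinearly by temporal proximity.
        for offset in (0, 1):
            b = t0 + offset
            w = 1.0 - np.abs(t_norm - b)
            ok = (b >= 0) & (b < num_bins) & (w > 0)
            np.add.at(grid, (b[ok], y[ok], x[ok]), p[ok] * w[ok])
        return grid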

Cited by 277 publications (262 citation statements)
References 56 publications
“…Reconstructed intensity image by [8]. Grid-like representations are compatible with conventional computer vision methods [83].…”
Section: Event Processing (mentioning)
confidence: 99%
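To illustrate the compatibility claim in the excerpt above: once events are rendered into a grid, the grid is just a multi-channel image, so an off-the-shelf CNN accepts it directly. The following is a minimal PyTorch sketch under assumed shapes (a 5-bin voxel grid over a 128x128 sensor) with a toy network; all names and sizes are illustrative:

    import torch
    import torch.nn as nn

    # Assumed setup: a 5-bin voxel grid treated as a 5-channel image. Any
    # standard CNN works once its first convolution expects that channel count.
    num_bins, height, width = 5, 128, 128
    cnn = nn.Sequential(
        nn.Conv2d(num_bins, 32, kernel_size=3, stride=2, padding=1),
        nn.ReLU(),
        nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(64, 2),  # hypothetical 2-class head, e.g. car vs. background
    )

    voxel_grid = torch.randn(1, num_bins, height, width)  # stand-in for real data
    logits = cnn(voxel_grid)  # shape: (1, 2)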
“…Reconstructed intensity image by [8]. Grid-like representations are compatible with conventional computer vision methods [83].…”
Section: Event Processingmentioning
confidence: 99%
“…Instead of simply averaging event rates to obtain input frames, our approach generalizes to using more advanced features for event-based vision, such as time surfaces (Sironi et al, 2018), event spike tensors (Gehrig et al, 2019) or motion-based features (Clady et al, 2017). As use-cases for event-based vision are becoming increasingly challenging (Gallego et al, 2019), and neuromorphic hardware platforms become more mature (DeBole et al, 2019), our approach fills an important gap to provide powerful SNNs ready for deployment on those platforms.…”
Section: Discussion (mentioning)
confidence: 99%
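For reference, the "simply averaging event rates" baseline that the excerpt generalizes beyond can be sketched in a few lines of NumPy; the function name and event layout are illustrative assumptions, and polarity is ignored for simplicity:

    import numpy as np

    def event_rate_frames(events, num_frames, height, width, duration):
        # events: (N, 4) array of (t, x, y, p) with t in seconds in [0, duration).
        frames = np.zeros((num_frames, height, width), dtype=np.float32)
        dt = duration / num_frames
        idx = np.minimum((events[:, 0] / dt).astype(int), num_frames - 1)
        x = events[:, 1].astype(int)
        y = events[:, 2].astype(int)
        np.add.at(frames, (idx, y, x), 1.0)  # per-pixel event counts per window
        return frames / dt                   # convert counts to rates (Hz)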
“…Accuracy [%] / # params / # ops [MOps]:
HATS/linear SVM (Sironi et al., 2018): 90.2 / - / -
Rec. U-Net+CNN (Rebecq et al., 2019): 91.0 / > 10^6 / -
ResNet-34 (Gehrig et al., 2019): 92.…
… outputs, spikes are present only in the short paths from input to output of the networks. Consequently, the overall spiking activity is low, slowing down the convergence of the firing rate approximations.…”
Section: N-Cars (mentioning)
confidence: 99%
“…TSs represent the recent history of moving edges in a compact way (a 2D grid, also called a motion history image in classical vision [48]) compared to other event representations [2], [49]. We use TSs because they are memory- and computationally efficient, informative, and interpretable, and because they have proven successful for motion (optical flow) [47], [50], [51] and depth estimation [21].…”
Section: A. Event Representation (mentioning)
confidence: 99%
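The time-surface (TS) representation described in this excerpt can also be sketched briefly: each pixel stores the timestamp of its most recent event, decayed exponentially toward a reference time. The following is a minimal NumPy sketch in the spirit of exponential-decay time surfaces; the decay constant tau and the (t, x, y, p) layout are illustrative assumptions, and polarity is ignored for simplicity:

    import numpy as np

    def time_surface(events, height, width, t_ref=None, tau=0.05):
        # events: (N, 4) array of (t, x, y, p), sorted by timestamp t.
        # Each pixel stores exp(-(t_ref - t_last) / tau), where t_last is the
        # timestamp of the most recent event at that pixel (0 if none fired).
        t = events[:, 0]
        x = events[:, 1].astype(int)
        y = events[:, 2].astype(int)
        if t_ref is None:
            t_ref = t.max()

        t_last = np.full((height, width), -np.inf)
        # With time-ordered events, later writes overwrite earlier ones,
        # so each pixel ends up holding its most recent timestamp.
        t_last[y, x] = t
        surface = np.exp((t_last - t_ref) / tau)
        surface[np.isneginf(t_last)] = 0.0  # pixels that never fired
        return surface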