2020
DOI: 10.1007/978-3-030-58583-9_32

Reducing the Sim-to-Real Gap for Event Cameras

Abstract: Event cameras are paradigm-shifting novel sensors that report asynchronous, per-pixel brightness changes called 'events' with unparalleled low latency. This makes them ideal for high-speed, high-dynamic-range scenes where conventional cameras would fail. Recent work has demonstrated impressive results using Convolutional Neural Networks (CNNs) for video reconstruction and optic flow with events. We present strategies for improving training data for event-based CNNs that result in a 20-40% boost in performance of…
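The training-data strategy the abstract refers to, and that the citation statements below describe as "a wider range of event rates and contrast thresholds", amounts to randomizing the simulator's contrast threshold so that synthetic event streams span the statistics of real sensors. The Python sketch below is a minimal illustration of that idea under an idealized log-intensity event model; the function simulate_events, the threshold sampling range, and the random test frames are assumptions for illustration, not the authors' code or exact settings.

```python
import numpy as np

def simulate_events(log_frames, timestamps, C_pos, C_neg):
    """Idealized DVS model: a pixel fires an event each time its log intensity
    moves by more than the contrast threshold since that pixel's last event.
    A full simulator (e.g. ESIM) also interpolates sub-frame timestamps and
    models noise; this sketch stamps events with the current frame time.
    Returns an (N, 4) array of (t, x, y, polarity)."""
    ref = log_frames[0].copy()          # per-pixel reference log intensity
    events = []
    for log_curr, t in zip(log_frames[1:], timestamps[1:]):
        diff = log_curr - ref
        n_pos = np.floor(np.maximum(diff, 0.0) / C_pos).astype(int)   # positive crossings
        n_neg = np.floor(np.maximum(-diff, 0.0) / C_neg).astype(int)  # negative crossings
        for y, x in zip(*np.nonzero(n_pos)):
            events.extend([(t, x, y, +1)] * n_pos[y, x])
        for y, x in zip(*np.nonzero(n_neg)):
            events.extend([(t, x, y, -1)] * n_neg[y, x])
        ref += n_pos * C_pos - n_neg * C_neg    # advance references past fired thresholds
    return np.array(events, dtype=np.float64)

# Contrast-threshold randomization: sample a fresh threshold per synthetic
# sequence so the training set covers a wide spread of event rates.
rng = np.random.default_rng(0)
frames = np.log(rng.uniform(0.05, 1.0, size=(10, 64, 64)))   # stand-in log-intensity video
ts = np.linspace(0.0, 0.09, 10)
C = rng.uniform(0.1, 1.5)               # assumed sampling range, not the paper's exact values
evts = simulate_events(frames, ts, C_pos=C, C_neg=C)
print(f"C = {C:.2f} -> {len(evts)} events")
```

Lower thresholds produce dense streams and higher thresholds sparse ones, so sampling the threshold per sequence exposes the reconstruction network to the full spread of event rates it will encounter on real hardware.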

Cited by 110 publications (194 citation statements)
References 40 publications
“…More recently, Ref. [45] outperformed E2VID on certain event datasets by training the neural network on augmented simulated data with a wider range of event rates and contrast thresholds. We did not use [45] as we found it after we had processed our experimental data using E2VID.…”
Section: Methods (mentioning)
confidence: 99%
“…Synth-to-Real vs Sim-to-Real. Using simulation on one side and real event data on the other indirectly introduces the sim-to-real gap studied in [11,30]. In order to understand how much this second domain shift affects performance, we compare our results with the ones which would have been obtained by using simulation even to extract events from the real (target) images.…”
Section: Results (mentioning)
confidence: 99%
“…Thus, the N-ROD setting is different from the one in [23], where event simulation is applied on both the source and target domains. Simultaneously, using simulation on one side and real event data on the other, N-ROD indirectly introduces the sim-to-real gap studied in [11,30]. The result is a double domain-shift, which combines both the synth-to-real and the sim-to-real shifts.…”
Section: Methods (mentioning)
confidence: 99%