2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros51168.2021.9636728
TUM-VIE: The TUM Stereo Visual-Inertial Event Dataset

Cited by 35 publications (25 citation statements)
References 12 publications
“…Publicly available event-based Visual Odometry (VO) datasets are limited and/or contain low-quality sensory data [50,59], either because of noisy events recorded with pioneering neuromorphic devices (such as the DAVIS240B [60] or DAVIS346B) or because of low-quality frames (i.e., no gamma correction, small fill factor). Recent publicly available datasets, such as [61,62], are equipped with newer sensors; however, they do not share the same optical axis, so frames and events lie on two separate image planes, captured by a standard camera and an event-based camera, respectively. There is a clear need for up-to-date datasets with state-of-the-art sensors in the event-based VO community.…”
Section: Beamsplitter and Additional Experiments
confidence: 99%
“…For technical reasons, GPS is not included in the data, and ground truth is approximated by three different LiDAR SLAM algorithms. Finally, the TUM-VIE [18] dataset is captured by a pair of Prophesee Gen4 CD event cameras (1280×720, events only), a stereo camera, and a six-axis IMU. The sequences are recorded in differently scaled environments and under different motion conditions (e.g.…”
Section: Related Work
confidence: 99%
“…The stereo event cameras have VGA resolution (640 × 480) with a horizontal baseline of about 17 cm. Since the MoCap system emits 850 nm infrared strobes to locate the passive spherical markers, we have adopted the common practice of placing an infrared filter (PHTODE IR690, cutoff range of 400-690nm) in front of the lenses [18], [21], thus blocking most of the flashing and reducing the background noise in the event stream. The reason for choosing a pair of event cameras with VGA (640 × 480) rather than HD resolution (1280 × 720) is that we have observed a smearing effect on the surface of active events.…”
Section: A. Sensor Setup
confidence: 99%
“…Datasets. We evaluate our stereo methods on sequences from five publicly available datasets [27], [38], [54]- [56] and a simulator. Sequences from [38], [55] were acquired with a hand-held stereo or trinocular event camera in indoor environments.…”
Section: A. Datasets and Evaluation Metrics
confidence: 99%
“…We present depth estimation results using Alg. 1 on the TUM-VIE dataset [56], the first public visual-inertial dataset with 1 Megapixel stereo event cameras (Prophesee Gen4 [5]). To the best of our knowledge, our work is the first to provide results on this new event-based dataset (the original paper presented the data but did not evaluate it on any event-based algorithm).…”
Section: F. Experiments on TUM-VIE Dataset
confidence: 99%