2019 International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra.2019.8793887

Are We Ready for Autonomous Drone Racing? The UZH-FPV Drone Racing Dataset

Abstract: Despite impressive results in visual-inertial state estimation in recent years, high-speed trajectories with six-degree-of-freedom motion remain challenging for existing estimation algorithms. Aggressive trajectories feature large accelerations and rapid rotational motions, and when they pass close to objects in the environment, this induces large apparent motions in the vision sensors, all of which increase the difficulty of estimation. Existing benchmark datasets do not address these types of trajectories, i…
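
To make the abstract's point about close, fast passes concrete, the sketch below uses the standard small-angle approximation: for a camera translating at speed v past a static point at distance d (perpendicular to the line of sight), the apparent angular rate is roughly v/d, and the image-plane flow scales with the focal length. The speeds, distances, and focal length here are hypothetical illustrations, not values taken from the dataset.

```python
def apparent_angular_rate(speed_mps, distance_m):
    """Apparent angular rate (rad/s) of a static point at distance_m from a
    camera translating at speed_mps perpendicular to the line of sight:
    omega ~= v / d (small-angle approximation)."""
    return speed_mps / distance_m


def pixel_flow(speed_mps, distance_m, focal_px):
    """Approximate image-plane flow (pixels/s) for a pinhole camera with
    focal length focal_px, again under the small-angle approximation."""
    return focal_px * apparent_angular_rate(speed_mps, distance_m)


if __name__ == "__main__":
    # Hypothetical scenarios: a slow pass far from structure vs. a racing
    # drone skimming a gate; focal_px=300 is an assumed camera, not the
    # dataset's calibration.
    for v, d in [(2.0, 5.0), (15.0, 2.0)]:
        print(f"v={v:5.1f} m/s, d={d:4.1f} m -> "
              f"{apparent_angular_rate(v, d):5.2f} rad/s, "
              f"{pixel_flow(v, d, focal_px=300.0):7.1f} px/s")
```

In this toy comparison the fast, close pass produces image flow nearly twenty times larger than the slow, distant one, which is why such sequences stress feature tracking and motion-blur handling far more than the gentler trajectories covered by earlier benchmarks.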

Cited by 181 publications (131 citation statements). References 25 publications.

“…In addition to the datasets discussed above, Table 3 presents an overview of various multi-sensor agent navigation datasets, including UZH-FPV Drone Racing [125], TUM RGB-D Dataset [126], ScanNet [127], NYU V2 [128], InteriorNet [129], SceneNet RGB-D [130], and others [131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144]. These datasets provide the basic requirements for simulating and evaluating multi-sensor fusion in experiments.…”
Section: Multi-modal Datasets (mentioning)
confidence: 99%
“…By bringing down expenses and providing reliable benchmarks, datasets as well as data simulators are elementary tools for further improvement in event-based vision. Sorted by task, they are generally divided into datasets for regression tasks [18][19][20][21] and those for classification tasks [22, 23]. Simulators [24, 25] …”
Section: A Glance Over Event-based Vision (mentioning)
confidence: 99%
“…In addition to the above toy examples, we compare the proposed eFMT with FMT, ORB-SLAM3, SVO and DSO on a larger UAV dataset. Note that even though there are several public UAV datasets [40][41][42][43][44], we could not use them in this paper because we require datasets without roll/pitch due to the properties of our algorithm. Ref.…”
Section: The UAV Dataset (mentioning)
confidence: 99%