2015 European Conference on Mobile Robots (ECMR)
DOI: 10.1109/ecmr.2015.7324048

Towards evasive maneuvers with quadrotors using dynamic vision sensors

Abstract: We present a method to predict collisions with objects thrown at a quadrotor using a pair of dynamic vision sensors (DVS). Due to the microsecond temporal resolution of these sensors and the sparsity of their output, the object's trajectory can be estimated with minimal latency. Unlike standard cameras that send frames at a fixed frame rate, a DVS only transmits pixel-level brightness changes ("events") at the time they occur. Our method tracks spherical objects on the image plane using probabilistic trackers…
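As a rough illustration of the event-driven pipeline the abstract describes, the Python sketch below consumes a stream of DVS events and updates a simple circle tracker event by event, which is what keeps latency minimal. The `Event` fields, the `CircleTracker` class, and the gating/gain values are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Event:
    x: int         # pixel column
    y: int         # pixel row
    t: float       # timestamp in seconds (microsecond resolution)
    polarity: int  # +1 brightness increase, -1 decrease

class CircleTracker:
    """Hypothetical per-event tracker for a spherical object on the image plane."""
    def __init__(self, x0: float, y0: float, radius: float, gain: float = 0.05):
        self.cx, self.cy, self.r, self.gain = x0, y0, radius, gain

    def update(self, ev: Event) -> None:
        # Only events close to the current circle hypothesis are attributed to the object.
        dx, dy = ev.x - self.cx, ev.y - self.cy
        dist = (dx * dx + dy * dy) ** 0.5
        if abs(dist - self.r) < 3.0:        # gating threshold in pixels (assumed)
            # Nudge the circle centre towards the supporting event.
            self.cx += self.gain * dx
            self.cy += self.gain * dy

def track(events, tracker: CircleTracker):
    """Process events as they arrive; each one updates the estimate immediately."""
    for ev in events:
        tracker.update(ev)
        yield ev.t, tracker.cx, tracker.cy
```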

Cited by 35 publications (31 citation statements: 1 supporting, 30 mentioning, 0 contrasting; citing publications span 2017–2023).
References 24 publications.
“…The times at which the room lights are switched off and on again are marked. As observed, EVO tracks the whole sequence with remarkable accuracy (6 cm drift in translation, and a few degrees (3°) in rotational drift, over a 30 m trajectory, that is, 0.2 % relative position error). Note that we do not perform any map or pose refinement (e.g., bundle adjustment); doing so would further reduce the drift.…”
Section: A. Accuracy Evaluation (supporting)
confidence: 67%
“…(i) Notably, event cameras have negligible latency (microseconds), and so they have the potential to enable fast maneuvers of robotic platforms [2], [3], which are currently not possible with standard cameras because of the high latency of the sensing and processing pipeline (in the order of tens of milliseconds).…”
Section: Introduction (mentioning)
confidence: 99%
“…These devices produce an asynchronous stream of events encoding brightness changes incurred at specific pixels, allowing very fast response times and a high dynamic range. DVS have been demonstrated in agile UAV maneuvers [17], orientation tracking [7], [11], visual odometry [13], and 6DOF tracking and SLAM [12], [20]. However, DVS-based systems still require the actual processing of visual data to be conducted on a separate external device, often via having to reconstruct a whole image.…”
Section: Introduction (mentioning)
confidence: 99%
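The quote above summarises how a DVS encodes brightness changes as an asynchronous event stream. As a minimal sketch of that operating principle (assuming an idealised single pixel and an arbitrary contrast threshold of 0.15), the toy model below emits an ON/OFF event whenever the pixel's log-intensity drifts by more than the threshold since the last event.

```python
import math

def dvs_events(intensity_samples, threshold: float = 0.15):
    """Toy model of one DVS pixel: yield (time, polarity) whenever the
    log-intensity moves by more than `threshold` from the last event level."""
    it = iter(intensity_samples)
    t0, i0 = next(it)
    ref = math.log(i0)
    for t, i in it:
        delta = math.log(i) - ref
        while abs(delta) >= threshold:      # a large change produces several events
            polarity = 1 if delta > 0 else -1
            yield t, polarity
            ref += polarity * threshold
            delta = math.log(i) - ref

# Example: a brightness ramp seen by the pixel produces a burst of ON events.
samples = [(t * 1e-6, 100 + 5 * t) for t in range(20)]   # (time [s], intensity)
print(list(dvs_events(samples)))
```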
“…The parallel nature of the SCAMP vision sensor allows various basic image processing tasks to be conducted with minimal computational overhead.¹ Efficient asynchronous flood fill of a binary black-and-white image is one such task, and it is this capability which is exploited in the target tracking algorithm implemented for this work.…”
Section: B. Vision Algorithm (mentioning)
confidence: 99%
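Flood fill of a binary image is the primitive the quote above singles out. For reference, a conventional sequential sketch is given below; the SCAMP chip performs this step in parallel on the focal plane, which this Python version makes no attempt to model, and the 4-connectivity and centroid read-out are assumptions for illustration only.

```python
from collections import deque

def flood_fill(binary, seed):
    """Return the connected set of white (1) pixels containing `seed`.
    `binary` is a 2-D list of 0/1 values; 4-connectivity is assumed."""
    h, w = len(binary), len(binary[0])
    sy, sx = seed
    if binary[sy][sx] == 0:
        return set()
    region, frontier = {seed}, deque([seed])
    while frontier:
        y, x = frontier.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and (ny, nx) not in region:
                region.add((ny, nx))
                frontier.append((ny, nx))
    return region

def centroid(region):
    """A simple target position estimate: the centroid of the filled region."""
    ys, xs = zip(*region)
    return sum(ys) / len(region), sum(xs) / len(region)
```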
“…Recent developments with dynamic vision sensors (DVS) offer some insight into the possibilities for developing more efficiently perceiving robotic systems. Examples for UAVs include works such as [1] for evasive maneuvers, [2] for agile visual odometry, and [3] for landing from optic flow. When using a DVS, however, both basic and more visually complex tasks such as target tracking, and/or combinations with lower visual competences, require separate and more conventional processing architectures.…”
Section: Introduction (mentioning)
confidence: 99%