2013
DOI: 10.3389/fnins.2013.00223

Robotic goalie with 3 ms reaction time at 4% CPU load using event-based dynamic vision sensor

Abstract: Conventional vision-based robotic systems that must operate quickly require high video frame rates and consequently high computational costs. Visual response latencies are lower-bounded by the frame period, e.g., 20 ms for a 50 Hz frame rate. This paper shows how an asynchronous neuromorphic dynamic vision sensor (DVS) silicon retina is used to build a fast self-calibrating robotic goalie, which offers high update rates and low latency at low CPU load. Independent and asynchronous per pixel illumination change eve…
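The frame-period latency floor stated in the abstract is simple arithmetic, sketched here for illustration (the helper name is an assumption, not from the paper):

```python
def min_visual_latency_ms(frame_rate_hz: float) -> float:
    """Lower bound on visual response latency for a frame-based sensor:
    nothing can be reported faster than one frame period.
    (Illustrative helper, not part of the cited paper.)"""
    return 1000.0 / frame_rate_hz

# A 50 Hz camera cannot respond faster than its 20 ms frame period,
# whereas an event camera reports each pixel change asynchronously.
print(min_visual_latency_ms(50))  # → 20.0
```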

Cited by 183 publications (140 citation statements)
References 14 publications
“…Table II provides the same results measured on the embedded computer. Our results are in line with the results reported in [9] of (2.2 ± 2.0) ms.…”
Section: B. Results (supporting)
confidence: 83%
“…Using two DVS, the authors implemented a pencil-balancing system on a highly-reactive platform free to move on a plane. A robotic goalkeeper with a reaction time of 3 ms was presented in [9].…”
Section: Related Work (mentioning)
confidence: 99%
“…The extreme low latency of event cameras enabled fast closed-loop control (e.g., inverse pendulum balancing (Conradt et al., 2009) and goal keeping with 3 ms reaction time and only 4% CPU utilization (Delbruck and Lang, 2013)). High-frequency visual feedback (>1 kHz) enabled stable manipulator control at micrometer scale (Ni et al., 2012).…”
Section: Event-driven Vision for Robots (mentioning)
confidence: 99%
“…Early event-based feature trackers were very simple and focused on demonstrating the low-latency and low-processing requirements of event-driven vision systems, hence they tracked moving objects as clustered blob-like sources of events [11], [12], [13], [14], [15] or lines [16].…”
Section: B. Event-based Tracking Literature (mentioning)
confidence: 99%
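The blob-like cluster tracking described in that citation statement can be sketched minimally: each incoming DVS event nudges the nearest cluster centroid toward it, and unmatched events seed new clusters. This is an illustrative sketch only; the class name and parameters (`radius`, `alpha`) are assumptions, not taken from the cited trackers.

```python
import math

class ClusterTracker:
    """Minimal event-driven blob tracker: every (x, y) event updates
    the nearest cluster centroid; events far from all clusters spawn
    a new one. Illustrative sketch, not the cited implementations."""

    def __init__(self, radius: float = 10.0, alpha: float = 0.1):
        self.radius = radius    # max event-to-centroid match distance (pixels)
        self.alpha = alpha      # per-event centroid update rate
        self.clusters = []      # list of [x, y] centroids

    def update(self, x: float, y: float) -> int:
        """Assign one event to a cluster; return the cluster index."""
        best, best_d = None, self.radius
        for c in self.clusters:
            d = math.hypot(x - c[0], y - c[1])
            if d < best_d:
                best, best_d = c, d
        if best is None:
            # No cluster within radius: seed a new one at the event.
            self.clusters.append([float(x), float(y)])
            return len(self.clusters) - 1
        # Move the matched centroid a fraction alpha toward the event.
        best[0] += self.alpha * (x - best[0])
        best[1] += self.alpha * (y - best[1])
        return self.clusters.index(best)
```

Feeding a stream of events from one moving object keeps a single cluster whose centroid follows the event activity at per-event (microsecond-scale) granularity, which is what gives such trackers their low latency and low processing cost.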