2012 IEEE International Symposium on Circuits and Systems (ISCAS)
DOI: 10.1109/iscas.2012.6272144

Live demonstration: Gesture-based remote control using stereo pair of dynamic vision sensors

Abstract: This demonstration shows a natural gesture interface for console entertainment devices that uses a stereo pair of dynamic vision sensors as input. Event-based processing of the sparse sensor output allows fluid interaction at a laptop processor load of less than 3%. The accompanying paper describes a novel gesture interface based on a stereo pair of event-based vision sensors and neuromorphic event processing techniques. The motion trajectory of a moving hand is detected every 3 ms by spatiotemporally correlating …
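The spatiotemporal correlation mentioned in the abstract can be illustrated with a minimal sketch: events that arrive within a short time window (3 ms, per the abstract) are pooled, and their centroid estimates the moving hand's position. The class and parameter names below are hypothetical, not the authors' implementation.

```python
from collections import deque

class EventTracker:
    """Minimal sketch of spatiotemporal event correlation (hypothetical
    names; not the authors' implementation): events inside a short time
    window are averaged to estimate the moving object's position."""

    def __init__(self, window_us=3000):
        self.window_us = window_us      # 3 ms window, as in the abstract
        self.events = deque()           # (x, y, t) tuples, oldest first

    def add_event(self, x, y, t):
        self.events.append((x, y, t))
        # Drop events that fall outside the correlation window.
        while self.events and t - self.events[0][2] > self.window_us:
            self.events.popleft()

    def position(self):
        """Centroid of the events in the current window, or None."""
        if not self.events:
            return None
        n = len(self.events)
        cx = sum(e[0] for e in self.events) / n
        cy = sum(e[1] for e in self.events) / n
        return cx, cy
```

Because only the sparse events inside the window are touched, per-update cost scales with event rate rather than frame size, which is consistent with the low processor load the demonstration reports.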

Cited by 29 publications (16 citation statements). References 3 publications.
“…In [14], the high temporal resolution of the DVS was exploited for stereo matching. In [15], event histograms were used for stereo correspondence. The output of the algorithm was used for gesture recognition.…”
Section: Related Work
confidence: 99%
“…These methods follow a two-step approach: first they solve the event correspondence problem across image planes and then triangulate the location of the 3D point. Events are matched in two ways: either using traditional stereo methods on artificial frames generated by accumulating events over time [Schraml et al., 2010, Kogler et al., 2011a], or exploiting simultaneity and temporal correlations of the events across sensors [Kogler et al., 2011b, Rogister et al., 2012, Lee et al., 2012, Camunas-Mesa et al., 2014].…”
Section: Related Work on Event-based Depth Estimation
confidence: 99%
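The two-step approach described in this citation statement can be sketched as follows: match events across the two sensors by temporal coincidence on the same rectified scanline, then triangulate depth from the horizontal disparity. All parameter values and function names here are illustrative assumptions, not any cited paper's implementation.

```python
def match_and_triangulate(left_events, right_events,
                          focal_px=200.0, baseline_m=0.1,
                          dt_max_us=500):
    """Sketch of two-step event-based stereo (hypothetical parameters):
    1) match events across sensors by temporal coincidence on the same
       (rectified) row, 2) triangulate depth from horizontal disparity."""
    points = []
    for (xl, yl, tl) in left_events:
        # Step 1: pick the right-sensor event on the same row whose
        # timestamp is closest, within the coincidence window.
        best = None
        for (xr, yr, tr) in right_events:
            if yr == yl and abs(tl - tr) <= dt_max_us:
                if best is None or abs(tl - tr) < abs(tl - best[2]):
                    best = (xr, yr, tr)
        if best is None:
            continue
        disparity = xl - best[0]
        if disparity <= 0:
            continue
        # Step 2: classic stereo triangulation, Z = f * B / d.
        z = focal_px * baseline_m / disparity
        points.append((xl, yl, z))
    return points
```

For example, a left event at x = 110 matched to a right event at x = 100 gives a disparity of 10 px and, with the assumed focal length and baseline, a depth of 2 m.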
“…This is the main difference between our method and existing event-based depth estimation methods (Section 3). While other works essentially attempt to estimate depth by first solving the stereo correspondence problem in the image plane (using frames of accumulated events [Schraml et al., 2010, Kogler et al., 2011a], reconstructed intensity [Kim et al., 2016], temporal correlation of events [Kogler et al., 2011b, Rogister et al., 2012, Lee et al., 2012, Camunas-Mesa et al., 2014], etc.), our method works directly in 3D space.…”
Section: Feature-Viewing Rays by Event Back-Projection
confidence: 99%