2012 IEEE International Symposium on Circuits and Systems
DOI: 10.1109/iscas.2012.6272143

Live demonstration: Behavioural emulation of event-based vision sensors

Abstract: This demonstration shows how an inexpensive high frame-rate USB camera is used to emulate existing and proposed activity-driven event-based vision sensors. A PS3-Eye camera which runs at a maximum of 125 frames/second with colour QVGA (320×240) resolution is used to emulate several event-based vision sensors, including a Dynamic Vision Sensor (DVS), a colour-change sensitive DVS (cDVS), and a hybrid vision sensor with DVS+cDVS pixels. The emulator is integrated into the jAER software project for event-based re…

Cited by 24 publications (19 citation statements) | References 4 publications
“…The simulator generates a number of spikes equal to the intensity difference divided by the chosen threshold, which are then assigned timestamps either all synchronous with the current image frame or linearly interpolated over the time period between frames. This linear interpolation would result in the generated spikes being equally distributed over the time period between frames [27]. This approach was later applied towards generating artificial, event-based datasets for visual odometry and SLAM algorithms hoping to leverage event-based techniques.…”
Section: Previous Approaches to Event Stream Simulation
confidence: 99%
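The scheme quoted above — spike count equal to the inter-frame intensity difference divided by a threshold, with timestamps either synchronous with the frame or linearly interpolated across the inter-frame interval — can be sketched as follows. This is an illustrative reading, not the cited implementation; the function name and event tuple layout are hypothetical.

```python
import numpy as np

def frame_pair_to_events(prev, curr, t_prev, t_curr, threshold=0.1,
                         interpolate=True):
    """Generate DVS-style events from two intensity frames.

    Sketch of the quoted scheme: the number of spikes per pixel is the
    absolute intensity difference divided by the threshold; timestamps
    are either all synchronous with the current frame or spread equally
    over the period between frames.
    """
    diff = curr - prev
    n_spikes = np.floor(np.abs(diff) / threshold).astype(int)
    events = []  # (timestamp, x, y, polarity)
    for y, x in zip(*np.nonzero(n_spikes)):
        k = n_spikes[y, x]
        pol = 1 if diff[y, x] > 0 else -1
        if interpolate:
            # distribute the k spikes equally over (t_prev, t_curr]
            ts = t_prev + (t_curr - t_prev) * (np.arange(1, k + 1) / k)
        else:
            ts = np.full(k, float(t_curr))
        events.extend((float(t), int(x), int(y), pol) for t in ts)
    events.sort(key=lambda e: e[0])
    return events
```

With a 0.1 threshold, a pixel that brightens by 0.35 between two frames yields three positive events spread over the inter-frame interval.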
“…Luminance values at each pixel are first calculated using the 'perceptual' luminance of the pixel and its immediate neighborhood, followed by taking a weighted average of the values in the surrounding pixels. The spike trains are then generated in a similar manner as in [27], but with the threshold for event generation being compared to the minimum of a neighborhood of values rather than a single pixel. Upon validation testing, using LICE pixel values resulted in the simulated event stream having events temporally and spatially distributed more similarly to a true event stream, albeit with worse performance than the log-intensity approach [29].…”
Section: Previous Approaches to Event Stream Simulation
confidence: 99%
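The neighborhood-minimum test described above can be sketched roughly as below. This is one plausible reading of the quoted description (per-pixel change compared against the minimum over a local neighborhood instead of a single pixel); the exact luminance weighting in the cited work may differ, and the function name and radius parameter are hypothetical.

```python
import numpy as np

def neighborhood_min_events(lum_prev, lum_curr, threshold=0.1, radius=1):
    """Emit an event only where the minimum absolute luminance change
    over a (2*radius+1)^2 neighborhood exceeds the threshold, rather
    than testing each pixel in isolation (a sketch, not the cited code).
    """
    h, w = lum_curr.shape
    diff = np.abs(lum_curr - lum_prev)
    # pad with edge values so border pixels have full neighborhoods
    padded = np.pad(diff, radius, mode='edge')
    # minimum of |change| over every pixel's neighborhood
    nb_min = np.min(
        [padded[dy:dy + h, dx:dx + w]
         for dy in range(2 * radius + 1)
         for dx in range(2 * radius + 1)],
        axis=0)
    return nb_min >= threshold  # boolean event map
```

A lone changed pixel is suppressed (its neighborhood minimum stays near zero), while a coherent moving region still fires, which matches the intent of comparing against a neighborhood rather than a single pixel.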
“…Afterwards, the confusion matrix elements were averaged across all frames, which gave mean values of 10,612, 23,904, 4,890, and 3,794 true positive, true negative, false positive, and false negative event counts, respectively. Similar metrics could be computed with different DVS models 2,7 in this scenario to compare their fidelity.…”
Section: DVS Model
confidence: 99%
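The mean confusion-matrix counts quoted above can be turned into the usual fidelity metrics directly; the metric choice here is ours, only the four counts come from the quoted text.

```python
# Mean confusion-matrix counts quoted above:
# TP=10,612, TN=23,904, FP=4,890, FN=3,794
tp, tn, fp, fn = 10612, 23904, 4890, 3794

precision = tp / (tp + fp)                          # ≈ 0.685
recall    = tp / (tp + fn)                          # ≈ 0.737
accuracy  = (tp + tn) / (tp + tn + fp + fn)         # ≈ 0.799
f1        = 2 * precision * recall / (precision + recall)  # ≈ 0.710
```

Such scalar summaries make it straightforward to compare the fidelity of different DVS models on the same scenario, as the quote suggests.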
“…Since we are developing a real-time system, all events should be emitted between frames, so the maximum number of spikes per neuron per frame is N_b. As in previous emulators [5], our basic emulator's output format is rate-based. Each emitted spike signifies the pixel changed by H brightness levels, where H is also the threshold.…”
Section: Fig. 1: DVS and Emulator Diagrams
confidence: 99%
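The rate-based constraint in the quote reduces to a one-line rule: each spike stands for a change of H brightness levels, and at most N_b spikes per neuron may be emitted per frame. A minimal sketch, using the quoted symbols (the clamping via `min` is our reading of the real-time constraint, not code from the cited work):

```python
def spikes_this_frame(delta_brightness, H, N_b):
    """Number of spikes a pixel emits in one frame interval: one spike
    per H brightness levels of change, capped at N_b spikes per neuron
    per frame so all events fit between frames.
    """
    n = abs(delta_brightness) // H
    return min(n, N_b)
```

For example, with H = 5 and N_b = 3, a change of 25 levels saturates at 3 spikes rather than emitting 5.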
“…An alternative that could reduce the cost and scarcity of DVSs while keeping spike rates low is to emulate the behaviour of a DVS. Katz et al developed a DVS emulator to test behaviours for new sensor models [5]. In their work, they transform video [at 125 frames per second (FPS)] provided by a commercial camera into a spike stream.…”
Section: Introduction
confidence: 99%