2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw.2018.00107

Pseudo-Labels for Supervised Learning on Dynamic Vision Sensor Data, Applied to Object Detection Under Ego-Motion

Abstract: In recent years, dynamic vision sensors (DVS), also known as event-based cameras or neuromorphic sensors, have seen increased use due to various advantages over conventional frame-based cameras. Operating on principles inspired by the retina, these sensors offer high temporal resolution that overcomes motion blur, high dynamic range that handles extreme illumination conditions, and low power consumption that makes them well suited to embedded systems on platforms such as drones and self-driving cars. However, event-based data sets are scarce…

Cited by 67 publications (47 citation statements)
References 31 publications
“…Standard computer vision algorithms cannot be used directly to process event data (Tan et al., 2015; Iyer et al., 2018). To address this problem, we introduce three encoding approaches, Frequency (Chen, 2018), SAE (Surface of Active Events) (Mueggler et al., 2017b), and LIF (Leaky Integrate-and-Fire) (Burkitt, 2006), to convert the asynchronous event stream into frames (Chen et al., 2019). The event data encoding procedure is shown in Figure 1D.…”
Section: Methods (mentioning)
confidence: 99%
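The three encodings named in this statement can be illustrated concretely. As one example, a minimal LIF-style encoder is sketched below, assuming events arrive as (t, x, y, polarity) tuples with timestamps in seconds; the function name, time constant, and threshold are illustrative choices, not parameters taken from the cited papers.

```python
import numpy as np

def lif_encode(events, shape, tau=0.03, threshold=1.0):
    """Encode an asynchronous event stream into one frame using a leaky
    integrate-and-fire (LIF) neuron per pixel.

    events: iterable of (t, x, y, polarity) tuples, timestamps in seconds.
    tau and threshold are illustrative defaults, not values from the papers.
    """
    h, w = shape
    potential = np.zeros((h, w))              # per-pixel membrane potential
    last_t = np.zeros((h, w))                 # time of each pixel's last update
    frame = np.zeros((h, w), dtype=np.uint8)  # output spike counts

    for t, x, y, _p in events:
        # Exponential leak since this pixel's previous event.
        potential[y, x] *= np.exp(-(t - last_t[y, x]) / tau)
        last_t[y, x] = t
        potential[y, x] += 1.0                # integrate the incoming event
        if potential[y, x] >= threshold:      # fire: brighten pixel, reset
            frame[y, x] = min(int(frame[y, x]) + 1, 255)
            potential[y, x] = 0.0
    return frame
```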
“…A DAVIS346redColor sensor is used for recording. Alongside the datasets, this report presents three encoding methods based on the frequency of events (Chen, 2018), the surface of active events (Mueggler et al., 2017a), and the Leaky Integrate-and-Fire (LIF) neuron model (Burkitt, 2006), respectively. We conclude this report with the recording details and summaries of the datasets and encoding methods.…”
Section: Introduction (mentioning)
confidence: 99%
“…Considering that the occurrence frequency of an event within a given time interval can represent its probability of being a valid event rather than noise, we count the event occurrences at each pixel (x, y), based on which we calculate the pixel value using the following range normalization equation inspired by (Chen, 2018):…”
Section: Methods (mentioning)
confidence: 99%
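The normalization equation itself is truncated in the excerpt above. A minimal sketch of the standard min-max form that such a description suggests is shown below, again assuming (t, x, y, polarity) event tuples; the mapping to [0, 255] is an assumption made here to produce a displayable grayscale frame, not a formula quoted from the cited paper.

```python
import numpy as np

def frequency_encode(events, shape):
    """Count events per pixel over a time window, then min-max normalize
    the counts to [0, 255] to form a grayscale frame."""
    h, w = shape
    counts = np.zeros((h, w), dtype=np.int64)
    for _t, x, y, _p in events:
        counts[y, x] += 1                 # per-pixel event occurrence count
    lo, hi = counts.min(), counts.max()
    if hi == lo:                          # flat image: avoid division by zero
        return np.zeros((h, w), dtype=np.uint8)
    # Range normalization: linearly map counts into [0, 255].
    return ((counts - lo) * 255.0 / (hi - lo)).astype(np.uint8)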
“…Since CNNs perform well in object detection with traditional vision sensors, we try to detect objects using this method with neuromorphic vision sensors. Chen (2018) uses APS images with a Recurrent Rolling Convolution network to produce pseudo-labels and then uses them as targets for DVS data in supervised learning with a tiny YOLO architecture. The results show that, using DVS data alone, object detection can run at very high speed (100 FPS) in a real environment.…”
Section: Introduction (mentioning)
confidence: 99%
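A rough sketch of this pseudo-labeling pipeline is given below. All names (`frame_detector`, `make_pseudo_labels`, `dvs_model`, the box tuple layout, the score threshold) are hypothetical stand-ins for illustration; the original work uses a Recurrent Rolling Convolution detector on APS frames and a tiny-YOLO-style network on DVS frames.

```python
def make_pseudo_labels(aps_frames, frame_detector, score_thresh=0.5):
    """Run a frame-based detector on grayscale APS frames and keep its
    confident detections as training targets for a DVS-based detector.

    frame_detector is a hypothetical callable returning a list of
    (x1, y1, x2, y2, score, class_id) tuples per frame.
    """
    labels = []
    for frame in aps_frames:
        boxes = frame_detector(frame)
        labels.append([b for b in boxes if b[4] >= score_thresh])
    return labels

# Training then pairs each DVS frame with the pseudo-label from the
# temporally aligned APS frame (dvs_model is a hypothetical detector):
#   for dvs_frame, target in zip(dvs_frames, pseudo_labels):
#       loss = dvs_model.train_step(dvs_frame, target)
```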
“…It stores at each location (x_i, y_i) information from the events that happened there at any time t_i within an established integration interval of size T. Variations of this representation have been used by many previous works, showing strong performance across very different applications: optical flow estimation [37], object detection [6], classification [20, 28, 31], and regression tasks [24].…”
Section: Event Representation (mentioning)
confidence: 99%
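A minimal sketch of one such representation, a Surface of Active Events variant, is given below, assuming (t, x, y, polarity) event tuples. Storing the normalized timestamp of each pixel's most recent event within the interval is one common variant, not the only one used in the works cited above.

```python
import numpy as np

def surface_of_active_events(events, shape, t_ref, T):
    """Build a Surface of Active Events: each pixel holds the normalized
    timestamp of its most recent event inside [t_ref - T, t_ref]."""
    h, w = shape
    sae = np.zeros((h, w))
    for t, x, y, _p in events:
        if t_ref - T <= t <= t_ref:
            # Newer events overwrite older ones; max() keeps the latest
            # timestamp even if the stream is not time-sorted.
            sae[y, x] = max(sae[y, x], (t - (t_ref - T)) / T)
    return (sae * 255).astype(np.uint8)
```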