2016 Second International Conference on Event-Based Control, Communication, and Signal Processing (EBCCSP)
DOI: 10.1109/ebccsp.2016.7605086
Feature detection and tracking with the dynamic and active-pixel vision sensor (DAVIS)

Abstract: Because standard cameras sample the scene at constant time intervals, they do not provide any information in the blind time between subsequent frames. However, for many high-speed robotic and vision applications, it is crucial to provide high-frequency measurement updates also during this blind time. This can be achieved using a novel vision sensor, called DAVIS, which combines a standard camera and an asynchronous event-based sensor in the same pixel array. The DAVIS encodes the visual content betwee…

Cited by 95 publications (74 citation statements) · References 21 publications
“…Afterwards, they tracked these feature corners across both frames using the Kanade-Lucas-Tomasi (KLT) tracker [13]. In [14]-[16], they first extracted intensity corners from intensity frames using the Harris detector [17], and then tracked these intensity corners based on event streams.…”
Section: A. Non-synchronous Event-based Corner Detection
confidence: 99%
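The Harris detector [17] referenced above scores each pixel with the corner response R = det(M) − k·tr(M)², where M is the local structure tensor built from image gradients. Below is a minimal pure-NumPy sketch of that response; the `harris_response` function, the box-filter smoothing, and the synthetic test image are illustrative choices, not taken from the cited works:

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2, computed per pixel
    from the structure tensor M of image gradients (3x3 box smoothing)."""
    # Image gradients via central differences (axis 0 = y, axis 1 = x).
    Iy, Ix = np.gradient(img.astype(float))
    # Structure-tensor entries before smoothing.
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):
        # Simple 3x3 box filter with edge padding.
        p = np.pad(a, 1, mode='edge')
        h, w = a.shape
        return sum(p[i:i + h, j:j + w]
                   for i in range(3) for j in range(3)) / 9.0

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace * trace

# Synthetic test: a bright square on a dark background. Corners of the
# square should get positive responses, edges negative, flat regions ~0.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
R = harris_response(img)
peak = np.unravel_index(np.argmax(R), R.shape)  # near one of the square's corners
```

The sign pattern of R is what makes the detector usable: positive at corners (both eigenvalues of M large), negative along edges (one eigenvalue large), and near zero in flat regions.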
“…Recently, we presented a hybrid method for feature detection and tracking for the DAVIS [18]. The method first detects and extracts features in the frames and then tracks them using only the events.…”
Section: A. Event-based Feature Detection and Tracking
confidence: 99%
“…The method first detects and extracts features in the frames and then tracks them using only the events. In the present paper, we improve [18] by (i) taking into account the observation that nearby pixels typically observe events at roughly the same time and (ii) introducing a tracking refinement step that works on a slower timescale to avoid drift. Furthermore, we improve the tracking speed and add dynamic reinitialization of new features (e.g., in new areas of the scene or when features are lost).…”
Section: A. Event-based Feature Detection and Tracking
confidence: 99%
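To make the frame-detect / event-track idea concrete: once a feature is localized in a frame, subsequent asynchronous events near that location can nudge the position estimate between frames. The toy `track_with_events` function below is a hypothetical sketch of this idea only; it is not the algorithm of [18] or of the paper being cited:

```python
import numpy as np

def track_with_events(feature_xy, events, win=5, gain=0.1):
    """Toy event-based tracker (illustrative, not the paper's method):
    pull a feature's position toward each event that falls inside a
    spatial window around the current estimate.

    `events` is an iterable of (x, y, t) rows, assumed time-ordered."""
    x, y = feature_xy
    for ex, ey, _t in events:
        if abs(ex - x) <= win and abs(ey - y) <= win:
            # Move a fraction of the way toward the event, so the
            # estimate follows the local event activity over time.
            x += gain * (ex - x)
            y += gain * (ey - y)
    return x, y

# Events drifting rightward from (10, 10) toward (14, 10): the tracked
# x-coordinate should follow them while y stays put.
events = np.array([(10 + 0.5 * i, 10.0, i * 1e-3) for i in range(9)])
x, y = track_with_events((10.0, 10.0), events)
```

A real tracker operates on edge patterns rather than bare event centroids, which is why the refinement step on a slower timescale (to counter drift) matters in the cited work.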
“…Event cameras, such as the Dynamic Vision Sensor (DVS) [1], possess outstanding properties compared to traditional cameras: very high dynamic range (140 dB vs. 60 dB), high temporal resolution (on the order of µs), and no motion blur. Hence, event cameras have great potential to tackle scenarios that are challenging for standard cameras (such as high speed and high dynamic range) in tracking [2][3][4][5][6][7][8][9], depth estimation [10][11][12][13][14][15][16][17][18][19], Simultaneous Localization and Mapping [20][21][22][23][24][25][26][27], and recognition [28][29][30][31][32], among other applications. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential.…”
Section: Introduction
confidence: 99%
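The dynamic-range figures quoted above (140 dB vs. 60 dB) follow the usual image-sensor convention DR(dB) = 20·log10(I_max / I_min): 140 dB corresponds to an intensity ratio of 10^7, while 60 dB corresponds to 10^3. A quick check (the function name is illustrative):

```python
import math

def dynamic_range_db(i_max, i_min):
    """Dynamic range in decibels from the ratio of brightest to darkest
    measurable intensity: DR_dB = 20 * log10(I_max / I_min)."""
    return 20 * math.log10(i_max / i_min)

event_cam_db = dynamic_range_db(10**7, 1)     # 140 dB: ratio of 10 million
standard_cam_db = dynamic_range_db(10**3, 1)  # 60 dB: ratio of one thousand
```

The four-orders-of-magnitude gap in intensity ratio is why event cameras cope with scenes that simultaneously contain deep shadow and direct sunlight.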