2019
DOI: 10.1002/widm.1310
Neuromorphic vision: From sensors to event‐based algorithms

Abstract: Regardless of the marvels brought by conventional frame-based cameras, they have significant drawbacks due to their data redundancy and temporal latency. This causes problems in applications where low-latency transmission and high-speed processing are mandatory. Proceeding along this line of thought, the neurobiological principles of the biological retina have been adapted to accomplish data sparsity and high dynamic range at the pixel level. These bio-inspired neuromorphic vision sensors alleviate the m…

Cited by 25 publications (14 citation statements)
References 107 publications (157 reference statements)
“…Unfortunately, frame-based vision has some disadvantages, e.g., high data redundancy, high bandwidth demand in short-latency use cases, or limited dynamic range [41]. The DVS (see Figure 10), sometimes called a "silicon retina" [41,42], functions differently. Each pixel of the sensor operates separately and emits its events immediately when the pixel illuminance changes [43].…”
Section: Experimental Tests
confidence: 99%
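The cited statement describes the core DVS behavior: each pixel independently emits an event when its illuminance changes by more than a contrast threshold (conventionally in log-intensity). A minimal sketch of that event-generation model, assuming hypothetical names (`dvs_events`, `threshold`) and a frame sequence as a stand-in for continuous illuminance:

```python
import numpy as np

def dvs_events(frames, timestamps, threshold=0.2):
    """Toy DVS emulator: emit an event (t, x, y, polarity) whenever a
    pixel's log-intensity changes by more than `threshold` since the
    last event at that pixel. `frames` is a sequence of 2-D grayscale
    arrays; `timestamps` gives a time per frame. Illustrative only."""
    eps = 1e-6                              # avoid log(0)
    ref = np.log(frames[0] + eps)           # per-pixel reference level
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        logi = np.log(frame + eps)
        diff = logi - ref
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        for y, x in zip(ys, xs):
            polarity = 1 if diff[y, x] > 0 else -1   # ON / OFF event
            events.append((t, int(x), int(y), polarity))
            ref[y, x] = logi[y, x]          # reset reference at that pixel
    return events
```

Unchanged pixels produce no events at all, which is the data-sparsity property the abstract attributes to these sensors.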
“…Over the last decade, an increasing number of studies have used event-based data for computer vision, with performance sometimes better than that obtained from more classical frame-based cameras in applications like object recognition (Neil and Liu, 2016; Stromatias et al, 2017) or visual odometry (Gallego and Scaramuzza, 2017; Nguyen et al, 2019). These studies were all based on deep convolutional neural networks or SNNs, coupled with supervised learning or classification approaches (see Lakshmi et al, 2019). For example, Zhu et al (2019) used an artificial neural network (ANN) to predict the optic flow from event-based data collected from a camera mounted on the top of a car moving within an urban environment (see also Zhu et al, 2018).…”
Section: Related Work
confidence: 99%
“…Event camera-based algorithms for single or multiple object detection, pose estimation, and tracking (MOT) can be classified into three categories: feature-based, artificial neural network-based, and time surface-based [35]. Studies focusing on robot pose estimation using event cameras have been reported in the literature [59]-[61].…”
Section: A Robotic Systems With Event Cameras
confidence: 99%
“…The event camera used in this study was affixed to a stationary mount on the ceiling to provide a fixed frame of reference. When an event camera moves, the background suffers from clutter, making it difficult to distinguish the object of interest [35].…”
Section: Introduction
confidence: 99%