2022
DOI: 10.1109/TITS.2020.3022921
NeuroIV: Neuromorphic Vision Meets Intelligent Vehicle Towards Safe Driving With a New Database and Baseline Evaluations

Abstract: Neuromorphic vision sensors such as the Dynamic and Active-pixel Vision Sensor (DAVIS), built on the silicon retina and inspired by biological vision, generate streams of asynchronous events that indicate local log-intensity brightness changes. Their high temporal resolution, low bandwidth, lightweight computation, and low latency make them a good fit for many motion-perception applications in intelligent vehicles. However, as a younger and smaller research field compared to classical computer vi…
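The event-generation principle described in the abstract can be sketched in a few lines: a pixel fires an event whenever its log-intensity change since the last reference exceeds a contrast threshold. This is a minimal illustration of the DVS sensing model, not the DAVIS driver API; the function name and threshold value are assumptions.

```python
import numpy as np

def generate_events(log_prev, log_curr, t, threshold=0.2):
    """Sketch of DVS-style event generation (hypothetical helper):
    a pixel fires when its log-intensity change exceeds `threshold`."""
    diff = log_curr - log_prev
    ys, xs = np.nonzero(np.abs(diff) >= threshold)
    # Each event is (x, y, timestamp, polarity): +1 brighter, -1 darker.
    return [(int(x), int(y), t, 1 if diff[y, x] > 0 else -1)
            for y, x in zip(ys, xs)]

prev = np.zeros((2, 2))
curr = np.array([[0.3, 0.0], [0.0, -0.5]])
events = generate_events(prev, curr, t=0.001)
# → [(0, 0, 0.001, 1), (1, 1, 0.001, -1)]
```

Only the two pixels whose log intensity changed by at least the threshold produce events, which is why an event stream is sparse and asynchronous rather than a dense frame.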

Cited by 39 publications (23 citation statements)
References 67 publications
“…In [23], the authors compared several higher-order cognitive abilities involving the manipulation and storage of visuospatial information under speeded conditions, and proposed a training intervention to improve driving skills and maintain safe driving. In [24], a neuromorphic vision sensor that mimics the human eye's sensitivity to brightness changes is embedded in intelligent vehicles to improve driving safety.…”
Section: B
confidence: 99%
“…The parameters of the surrounding environment are obtained through the on-board cameras to serve automatic driving [13]–[16]. As required, the machine-vision perception platform for automatic vehicle driving is equipped with six cameras, arranged according to three different functions: front view, measuring view, and rear view. The front camera mainly captures image information of the road ahead of the vehicle, derives the forward environment from that information, and, through in-depth analysis, supplies it to the system for early warning and vehicle control.…”
Section: Construction Of Machine Vision Perception Platform For Vehicle Automatic Driving
confidence: 99%
“…For example, Chen et al. [168] leverage the "frequency encoding" representation, where a standard YOLOv3 CNN architecture [79] is used for pedestrian detection. Chen et al. [169] also obtain the best results with "frequency encoding" among other encoding schemes for driver-monitoring applications. Perot et al. [170] test different accumulation and encoding strategies for object detection, with the best results using the "discretized event volume" representation from Zhu et al. [171].…”
Section: Data Representation and Processing
confidence: 99%
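The "frequency encoding" idea referenced above can be sketched as accumulating per-pixel event counts over a time window into a 2-D frame that a standard CNN can consume. This is an illustrative reconstruction of the concept, not the authors' implementation; the function name and normalization choice are assumptions.

```python
import numpy as np

def frequency_encode(events, shape):
    """Sketch of frequency encoding: count events per pixel over a
    window, then normalize so a frame-based CNN can consume it."""
    frame = np.zeros(shape, dtype=np.float32)
    for x, y, _t, _polarity in events:
        frame[y, x] += 1.0          # event count per pixel
    if frame.max() > 0:
        frame /= frame.max()        # normalize counts to [0, 1]
    return frame

events = [(0, 0, 0.00, 1), (0, 0, 0.10, -1), (1, 1, 0.20, 1)]
frame = frequency_encode(events, (2, 2))
# frame[0, 0] == 1.0, frame[1, 1] == 0.5
```

Pixels that fire often (e.g. at moving edges) dominate the encoded frame, which is why this representation works well with frame-based detectors such as YOLOv3.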
“…We expect to see more and more HD event-camera datasets such as that of Perot et al. [170] appear in public. The published datasets serve various purposes, such as target detection [176], lane detection [177], and drowsiness detection [169]. A comprehensive summary of the current open datasets is given in Table II.…”
Section: Applications In Autonomous Driving or ADAS
confidence: 99%