2019
DOI: 10.1038/s41598-019-40064-0

A Spiking Neural Network Model of Depth from Defocus for Event-based Neuromorphic Vision

Abstract: Depth from defocus is an important mechanism that enables vision systems to perceive depth. While machine vision has developed several algorithms to estimate depth from the amount of defocus present at the focal plane, existing techniques are slow, energy-demanding, and rely mainly on numerous acquisitions and massive amounts of filtering operations on the pixels’ absolute luminance values. Recent advances in neuromorphic engineering allow an alternative to this problem, with the use of event-based silicon re…
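
For context, depth from defocus rests on a standard thin-lens relation between blur and distance. The equations below are a textbook sketch, not quoted from the (truncated) abstract; the symbols are conventional: f is the focal length, v the lens-to-sensor distance, A the aperture diameter, s the in-focus object distance, d the actual object distance, and c the diameter of the blur circle an object at d casts on the sensor.

\[ \frac{1}{f} = \frac{1}{s} + \frac{1}{v}, \qquad c = \frac{A f}{s - f} \cdot \frac{|d - s|}{d} \]

Inverting the blur model recovers depth from a measured blur diameter,

\[ d = \frac{A f\, s}{A f \mp c\,(s - f)}, \]

with the sign chosen according to whether the object lies beyond or in front of the focal plane. This is only the underlying geometry; the paper's contribution, per the abstract, is estimating defocus with event-based sensors and a spiking network rather than with filtering on absolute luminance frames.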

Cited by 41 publications (32 citation statements)
References 61 publications
“…As a result, it is very progressive and combines many benefits of the research it is based on. Also noteworthy are the results of Martel et al (2018) and Haessig et al (2019). These introduce active approaches that require additional hardware.…”
Section: Discussion
confidence: 85%
“…Quite different solutions to recover depth from event-based data are shown in Martel et al (2018) and Haessig et al (2019). These are active techniques and require additional hardware, setting them apart from most investigated methods.…”
Section: Event-driven Stereoscopy
confidence: 99%
“…Spiking neural networks (SNNs) [45] have been applied to various event-based fields, including low-level tasks such as optical flow estimation [46][47][48], high-level tasks such as object recognition [49,50] and classification [51], and tasks concerning the 3D structure of the scene [52,53] and robotic visual perception [54]. Benosman et al [16] used a spiking neural network that is theoretically similar to the classical Lucas-Kanade algorithm to estimate visual motion, exploiting the sparse, high-temporal-resolution event data.…”
Section: Data-driven Approaches
confidence: 99%
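
The Lucas-Kanade analogy in the last excerpt can be made concrete. A common event-based formulation fits a local plane t ≈ ax + by + c to the timestamps of recent events around a pixel; the moving-edge constraint ∇t · v = 1 then gives the normal flow v = (a, b)/(a² + b²). The sketch below is a minimal NumPy illustration of that plane-fitting idea, assuming events arrive as (x, y, t) triples; the function name and parameters are hypothetical, and this is not the implementation of Benosman et al [16].

import numpy as np

def plane_fit_flow(events, x0, y0, t0, radius=3, window=5e-3):
    """Estimate normal optical flow at (x0, y0, t0) by fitting a plane
    t ~ a*x + b*y + c to the timestamps of nearby events. A sketch of
    the local plane-fitting idea, not any paper's exact algorithm.

    events : (N, 3) array of (x, y, t) rows, t in seconds.
    Returns (vx, vy) in pixels per second, or None if the fit degenerates.
    """
    x, y, t = events[:, 0], events[:, 1], events[:, 2]
    # Keep events inside a small spatio-temporal neighbourhood.
    near = ((np.abs(x - x0) <= radius) & (np.abs(y - y0) <= radius)
            & (np.abs(t - t0) <= window))
    xs, ys, ts = x[near], y[near], t[near]
    if xs.size < 3:
        return None  # too few events to constrain a plane
    # Least-squares fit of the event-time surface t = a*x + b*y + c.
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    (a, b, _), *_ = np.linalg.lstsq(A, ts, rcond=None)
    g2 = a * a + b * b  # squared gradient of the fitted plane
    if g2 < 1e-12:
        return None  # flat time surface: no observable motion
    # Moving-edge constraint grad(t) . v = 1  =>  normal flow estimate.
    return a / g2, b / g2

As a sanity check, a vertical edge sweeping rightward at 100 px/s produces events with t ≈ x / 100, so the fit yields a ≈ 0.01, b ≈ 0, and the function returns roughly (100, 0).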