2012
DOI: 10.1109/jssc.2011.2167409
An Event-Driven Multi-Kernel Convolution Processor Module for Event-Driven Vision Sensors

Cited by 102 publications (100 citation statements)
References 41 publications
“…In this prototype the visual processing (visual stimulus orientation detection) was performed by real-time address-event processing software (57). In future versions this processing could also be performed in neuromorphic hardware by using simple-cell orientation selectivity hardware models (58,59) or event-driven convolution chips (60).…”
Section: Discussion
confidence: 99%
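The passage above describes orientation detection done in real-time address-event processing software. A minimal sketch of that idea, under assumed kernel shapes and an assumed exponential-decay activity model (not the cited implementation), could look like this: each incoming event votes for the orientation whose kernel best matches recent local event activity.

```python
import numpy as np

# Hypothetical sketch of software address-event orientation detection.
# Kernel shapes and the decay model are illustrative assumptions.

H = W = 32
activity = np.zeros((H, W))          # leaky map of recent event activity
KERNELS = {
    "horizontal": np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]], float),
    "vertical":   np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], float),
}

def on_event(x, y, decay=0.9):
    """Process one address-event; return the locally dominant orientation."""
    activity[:] *= decay             # forget old events
    activity[y, x] += 1.0
    y0, y1 = max(y - 1, 0), min(y + 2, H)
    x0, x1 = max(x - 1, 0), min(x + 2, W)
    patch = activity[y0:y1, x0:x1]
    scores = {name: float((k[:patch.shape[0], :patch.shape[1]] * patch).sum())
              for name, k in KERNELS.items()}
    return max(scores, key=scores.get)

# A short horizontal streak of events is classified as horizontal.
for x in (10, 11, 12):
    label = on_event(x, 5)
print(label)  # "horizontal"
```

Because every event triggers only a local update, output labels are available event by event rather than once per frame.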
“…Alternatively, serial AER schemes have also been proposed where a differential microstrip communicates (x, y, p) events bit-serially and asynchronously [71][72][73]. The availability of ED sensing and processing chips has allowed the implementation of first ED sensory systems [50,51,55] that show the unique pseudosimultaneity property, where the input and output event flows of a processing stage are (in practice) simultaneous or coincident in time. This is illustrated in Fig.…”
Section: Spiking Neural Network for Event-Driven Sensing and Processing
confidence: 99%
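The passage above mentions serial AER links that communicate (x, y, p) events bit-serially. A small sketch of how such an event might be packed into one word and clocked out bit by bit (field widths here are illustrative assumptions, not the cited link format):

```python
# Hypothetical sketch of packing an AER event (x, y, p) into one word
# for bit-serial transmission. Field widths are illustrative assumptions.

X_BITS, Y_BITS, P_BITS = 9, 9, 1     # e.g. a 512x512 sensor, 1 polarity bit

def pack_event(x, y, p):
    """Pack (x, y, polarity) into a single integer word, x in the MSBs."""
    assert 0 <= x < (1 << X_BITS) and 0 <= y < (1 << Y_BITS) and p in (0, 1)
    return (x << (Y_BITS + P_BITS)) | (y << P_BITS) | p

def unpack_event(word):
    p = word & ((1 << P_BITS) - 1)
    y = (word >> P_BITS) & ((1 << Y_BITS) - 1)
    x = word >> (Y_BITS + P_BITS)
    return x, y, p

def serialize(word, nbits=X_BITS + Y_BITS + P_BITS):
    """Emit bits MSB-first, as a bit-serial link would clock them out."""
    return [(word >> i) & 1 for i in reversed(range(nbits))]

w = pack_event(300, 17, 1)
assert unpack_event(w) == (300, 17, 1)
assert len(serialize(w)) == 19
```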
“…12.1e3 shows the situation for an ED implementation. An ED processor module processes events as they flow in, with a delay typically in the 100 ns range per event [51]. The system does not need to wait for collecting image frames, but output events are emitted while the input events are processed as soon as enough input events are received, as is in cortical circuits.…”
Section: Spiking Neural Network for Event-Driven Sensing and Processing
confidence: 99%
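The passage above describes how an ED processor module handles events as they flow in, emitting output while input is still arriving. A minimal software sketch of that event-driven convolution idea (assumed kernel, threshold, and reset behaviour for illustration, not the chip's actual datapath): each input event adds the kernel around its address to a map of neuron states, and any neuron crossing threshold fires an output event immediately.

```python
import numpy as np

# Sketch of event-driven convolution under assumed parameters:
# per-event kernel accumulation, threshold-and-reset neurons.

H = W = 16
state = np.zeros((H, W))
kernel = np.ones((3, 3)) / 9.0       # illustrative smoothing kernel
THRESHOLD = 0.3

def process_event(x, y):
    """Accumulate the kernel around (x, y); return output events fired now."""
    kh, kw = kernel.shape
    y0, x0 = y - kh // 2, x - kw // 2
    out = []
    for dy in range(kh):
        for dx in range(kw):
            yy, xx = y0 + dy, x0 + dx
            if 0 <= yy < H and 0 <= xx < W:
                state[yy, xx] += kernel[dy, dx]
                if state[yy, xx] >= THRESHOLD:   # fire and reset
                    state[yy, xx] = 0.0
                    out.append((xx, yy))
    return out

# Three events at the same address push all nine neighbouring neurons
# past threshold on the third event; outputs appear mid-stream, with no
# frame boundary.
fired = []
for _ in range(3):
    fired += process_event(8, 8)
```

The point of the sketch is the pseudosimultaneity property: output events are produced per input event, so the output flow overlaps the input flow in time.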
“…They addressed the need in some applications of higher speed and sensitivity by realizing that the best improvement in performance results from adding more gain and bandwidth to the photoreceptor that precedes the differencing amplifier. They have taken two approaches to this improvement but only the first is published [19]. In their pixel, they interposed two non-inverting voltage gain amplifiers between the logarithmic photoreceptor and the capacitive differencing amplifier.…”
Section: Faster and More Sensitive DVS Pixels
confidence: 99%
“…Instead of supplying photocurrent from the source of an nfet with feedback to the gate of the nfet, they use the photoreceptor from Oliver Landolt [19], where the feedback photocurrent is supplied from the drain of a pfet, with feedback applied to the source of the pfet. The gate of the pfet is tied to a fixed voltage, which determines the clamped photodiode voltage.…”
Section: Faster and More Sensitive DVS Pixels
confidence: 99%