2018
DOI: 10.1109/tcsvt.2016.2630848

A Compact VLSI System for Bio-Inspired Visual Motion Estimation

Abstract: This paper proposes a bio-inspired visual motion estimation algorithm based on motion energy, along with its compact very-large-scale integration (VLSI) architecture using low-cost embedded systems. The algorithm mimics the motion perception functions of retina, V1, and MT neurons in the primate visual system. It involves operations of ternary edge extraction, spatiotemporal filtering, motion energy extraction, and velocity integration. Moreover, we propose the concept of a confidence map to indicate the reliability o…

Cited by 10 publications (28 citation statements)
References 47 publications
“…The virtual DVS sensor on the FPGA chip was essentially a 16 KB AER data buffer memory, and it sent out stored AER event streams continuously, as a real DVS camera does. With such a virtualized sensor technique, the processing performance of the prototype in real application situations can be measured fairly, as if it were directly interfacing with a real sensor [36, 37]. However, the virtual DVS and other components, such as the ARM core, the PC, and the Ethernet interface in Figure 6, were only used to build the laboratory evaluation environment.…”
Section: Results
confidence: 99%
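The virtual-sensor idea above — a small buffer memory that streams stored AER events as if a live camera were attached — can be sketched as a minimal replay loop. This is an illustrative sketch, not the cited implementation: the 8-byte record layout, field widths, and function names are assumptions (real DVS formats such as jAER's AEDAT differ).

```python
import io
import struct

# Hypothetical 8-byte AER record: 32-bit timestamp (microseconds),
# 16-bit packed pixel address, 2 padding bytes. Real AER/AEDAT layouts differ.
RECORD = struct.Struct('>IHxx')

def pack_events(events):
    """Pack (timestamp_us, address) pairs into a byte buffer, as the
    FPGA-side buffer memory would hold them."""
    buf = io.BytesIO()
    for ts, addr in events:
        buf.write(RECORD.pack(ts, addr))
    return buf.getvalue()

def replay(buffer):
    """Stream stored events back in order, like a virtual sensor
    continuously sending out its buffered AER event stream."""
    for offset in range(0, len(buffer), RECORD.size):
        yield RECORD.unpack_from(buffer, offset)
```

At 8 bytes per event, a 16 KB buffer of this hypothetical format would hold 2048 events before wrapping or refilling.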
“…Spatiotemporally white noise was added to the sequences before the vision-condition filtering to simulate external (physical-world) noise. Finally, we applied Shi and Luo's (Shi & Luo, 2018) implementation of Grzywacz and Yuille's motion energy model (see Figure 1) to estimate the speed of motion in these sequences from their spatiotemporal frequency components, called motion energy. We examined the relationship between spatial frequencies and the speed estimation accuracy of the computational model under different simulated vision conditions and at different speeds.…”
Section: Methods
confidence: 99%
“…We implemented the widely accepted computational motion perception model (Adelson & Bergen, 1985; Grzywacz & Yuille, 1990) with the following modifications (see Figure 1). (1) The 2D spatial filters were decomposed into faster two-stage cascaded 1D filtering (Etienne-Cummings, Van der Spiegel, & Mueller, 1999; Shi & Luo, 2018). (2) In the pre-processing stage, we used a DoG filter to cover a wider spatial frequency band, from 0.5 to 36 cpd, to facilitate subsequent processing.…”
Section: Methods
confidence: 99%
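The two-stage cascaded 1D filtering mentioned above relies on the separability of the 2D Gaussian: convolving with the full N×N kernel costs N² MACs per pixel, while a row pass followed by a column pass with the 1D kernel costs only 2N, with identical output. A minimal numpy sketch of this equivalence (function names and sizes are illustrative, not from the cited papers):

```python
import numpy as np

def gauss1d(sigma, radius):
    # Normalized 1D Gaussian kernel
    n = np.arange(-radius, radius + 1)
    k = np.exp(-n**2 / (2 * sigma**2))
    return k / k.sum()

def conv2d(img, kernel):
    # Naive 'same' correlation with zero padding (symmetric kernels make
    # correlation and convolution identical)
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32))
g = gauss1d(1.5, 4)

full2d = conv2d(img, np.outer(g, g))                     # N^2 MACs per pixel
cascaded = conv2d(conv2d(img, g[None, :]), g[:, None])   # 2N MACs per pixel
```

The two results agree to machine precision; the same trick extends to the spatiotemporal case, where the temporal stage is cascaded after the two spatial 1D stages.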
“…While traditionally this would have required $N_S^2 N_T$ multiply-and-accumulate (MAC) operations per pixel per frame ($N_S$ and $N_T$ are the filter sizes along the space and time dimensions, respectively), a separable implementation can be much more efficient. Because the horizontal and vertical components of the optical flow can be computed independently from separate horizontal and vertical motion energy channels, the 3D spatiotemporal filter can be decomposed into cascaded spatial and temporal filters [14, 25]. This way, the horizontal and vertical motion energy feature maps $\mathrm{ME}_X$ and $\mathrm{ME}_Y$ for different spatiotemporal tuning frequencies $(f_{X/Y}, f_T)$ are extracted as:

$$
\begin{aligned}
I_T(x, y, t; f_T) &= I(x, y, t) * \mathrm{Gabor}(t; f_T),\\
\mathrm{ME}_X(x, y, t; f_S, f_T) &= \left|\, I_T(x, y, t; f_T) * \mathrm{Gauss}(y) * \mathrm{Gabor}(x; f_X) \,\right|^2,\\
\mathrm{ME}_Y(x, y, t; f_S, f_T) &= \left|\, I_T(x, y, t; f_T) * \mathrm{Gauss}(x) * \mathrm{Gabor}(y; f_Y) \,\right|^2
\end{aligned}
$$
…”
Section: Proposed TTC Estimation Algorithm
confidence: 99%
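The cascaded filtering in the equations above can be sketched directly: a 1D temporal Gabor, then a 1D Gaussian and a 1D Gabor along the two spatial axes, with the squared magnitude yielding phase-invariant motion energy. This is a sketch under assumed filter forms (complex Gabor kernels, arbitrary sigmas, 'same'-mode convolution), not the paper's fixed-point VLSI implementation:

```python
import numpy as np

def gabor1d(freq, sigma, radius):
    # Complex Gabor: Gaussian envelope times complex exponential, so the
    # squared magnitude later sums the quadrature (cosine/sine) pair
    n = np.arange(-radius, radius + 1)
    return np.exp(-n**2 / (2 * sigma**2)) * np.exp(2j * np.pi * freq * n)

def gauss1d(sigma, radius):
    n = np.arange(-radius, radius + 1)
    k = np.exp(-n**2 / (2 * sigma**2))
    return k / k.sum()

def conv_axis(vol, kernel, axis):
    # 1D 'same' convolution along one axis of a (t, y, x) volume; the
    # signal along that axis must be at least as long as the kernel
    return np.apply_along_axis(
        lambda s: np.convolve(s, kernel, mode='same'), axis, vol)

def motion_energy_x(video, f_t, f_x, sigma=2.0, radius=6):
    # I_T = I * Gabor(t; f_T)
    i_t = conv_axis(video.astype(complex), gabor1d(f_t, sigma, radius), 0)
    # ME_X = |I_T * Gauss(y) * Gabor(x; f_X)|^2
    smoothed = conv_axis(i_t, gauss1d(sigma, radius), 1)
    tuned = conv_axis(smoothed, gabor1d(f_x, sigma, radius), 2)
    return np.abs(tuned)**2
```

For a drifting grating, the channel whose $(f_X, f_T)$ tuning matches the grating's spatiotemporal frequency responds far more strongly than the oppositely tuned channel, which is what makes the maps directionally selective; $\mathrm{ME}_Y$ follows by swapping the roles of the two spatial axes.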