2013 IEEE Intelligent Vehicles Symposium (IV)
DOI: 10.1109/ivs.2013.6629568
Exploration of FPGA-based dense block matching for motion estimation and stereo vision on a single chip

Abstract: Camera-based systems in series-production vehicles have gained in importance in the past several years, as documented, for example, by the introduction of front-view cameras and applications such as traffic-sign or lane detection by all major car manufacturers. Besides a pure or enhanced visualization of the vehicle's environment, camera systems have also been extensively used for the design and implementation of complex driver assistance functions in diverse research scenarios, as they offer the possibility to ext…

Cited by 4 publications (2 citation statements)
References 15 publications
“…Delegating frame storage to off-chip memory solves the capacity problem, at the cost of performance and monetary expense. Caching techniques are used to minimize the performance implications: e.g., Sahlbach et al. [25] use parallel matching arrays for accelerating computation; however, each array is only capable of holding one row of interest (the complete frame is stored in off-chip memory), and their results do not discriminate resource usage across modules, making it hard to estimate the precise array costs. This approach can only support a limited class of algorithms: column-wise operations, for instance, require off-chip memory re-ordering for data to be loaded on-chip as rows, consuming precious processing time.…”
Section: Background and Related Work
confidence: 99%
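The access-pattern limitation described in this statement can be sketched in software. The following is a minimal, hypothetical model (class and method names are illustrative, not from the paper): a cache that holds a single row of interest on-chip while the full frame stays in simulated off-chip memory. Row-wise access costs one off-chip read, while column-wise access forces one read per row — the re-ordering penalty the excerpt refers to.

```python
class RowCache:
    """Holds one frame row on-chip; all other rows stay 'off-chip'.

    Purely illustrative: the list `frame` stands in for off-chip DRAM.
    """

    def __init__(self, frame):
        self.frame = frame      # simulated off-chip memory
        self.row = None         # the single on-chip row of interest
        self.offchip_reads = 0  # counts off-chip row fetches

    def load_row(self, y):
        # One off-chip burst read per row: cheap for row-wise algorithms.
        self.offchip_reads += 1
        self.row = self.frame[y]
        return self.row

    def load_column(self, x):
        # Column-wise access defeats a row cache: every element requires
        # its own row fetch (or an off-chip re-ordering pass beforehand).
        return [self.load_row(y)[x] for y in range(len(self.frame))]


frame = [[y * 10 + x for x in range(4)] for y in range(3)]
cache = RowCache(frame)
row = cache.load_row(1)        # one off-chip access for a whole row
col = cache.load_column(2)     # one off-chip access per element
```

Here `row` is `[10, 11, 12, 13]` after a single off-chip read, while `col` is `[2, 12, 22]` at the cost of three reads — the asymmetry that restricts such architectures to row-oriented algorithms.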
“…Delegating frame storage to off-chip memory solves the capacity problem, at the cost of performance and monetary expense. Caching techniques are used to minimize the performance implications: e.g., Sahlbach et al [ 25 ] use parallel matching arrays for accelerating computation; however, each array is only capable of holding one row of interest (the complete frame is stored in off-chip memory) and their results do not discriminate resource usage across modules, making it hard to estimate the precise array costs. This approach can only support a limited class of algorithms: column-wise operations, for instance, require off-chip memory re-ordering for data to be loaded on-chip as rows, consuming precious processing time.…”
Section: Background and Related Workmentioning
confidence: 99%
“…Concerning power efficiency in the form of throughput per watt, our implementations and test system achieve 0.37 Hz/W (CPU) or 0.26 Hz/W (GPU). For comparison, we estimated 0.55 Hz/W (1.50 Hz/W if only frontal) for a modern implementation [18] based on an FPGA rather than general-purpose processors when applied to the same input data.…”
Section: Experimental Evaluation
confidence: 99%
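The Hz/W metric used in this statement is simply processing throughput (frames per second) divided by electrical power draw. A minimal sketch, assuming hypothetical figures (the 26 fps and 100 W below are illustrative, not measurements from either paper):

```python
def hz_per_watt(throughput_hz: float, power_w: float) -> float:
    """Power efficiency: frames processed per second per watt consumed."""
    return throughput_hz / power_w


# Hypothetical example: a system sustaining 26 fps while drawing 100 W
# yields 0.26 Hz/W, the same order as the GPU figure quoted above.
efficiency = hz_per_watt(26.0, 100.0)
```

Comparing systems on Hz/W rather than raw Hz is what allows a low-clocked FPGA to come out ahead of general-purpose processors despite lower absolute throughput.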