Camera-based systems in series vehicles have gained importance in recent years, as evidenced, for example, by the introduction of front-view cameras and applications such as traffic sign and lane detection by all major car manufacturers. Beyond pure or enhanced visualization of the vehicle's environment, camera systems have also been used extensively to design and implement complex driver assistance functions in diverse research scenarios, as they offer the possibility to extract both depth and motion information for static and moving objects. However, evolving existing computation-intensive vision applications from research vehicles toward series integration is currently a challenging task, owing to the absence of high-performance computer architectures that adhere to the strict power and cost constraints. This paper addresses this challenge and explores FPGA-based dense block matching, which enables the calculation of depth information and motion estimation on shared hardware resources, with regard to its applicability in intelligent vehicles. This includes the introduction of design scalability in time and space, thereby supporting customized application implementations and multiple camera setups. The presented modular concept also allows enhancements with pre- and post-processing features, which can be utilized to refine the obtained matching results. Its usability has been evaluated in diverse application scenarios and reaches high-performance image processing throughput of up to 740 GOPS at an acceptable power level of 11 W, rendering it a suitable candidate for future series vehicles.
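To make the core operation concrete, the following is a minimal software sketch of dense block matching for disparity (depth) estimation: for each pixel of the left image, the sum of absolute differences (SAD) over a small block is evaluated for a set of candidate horizontal shifts into the right image, and the shift with the lowest SAD is taken as the disparity. This is an illustrative reference only; the block size, disparity range, and sequential loops are assumptions for clarity, whereas an FPGA implementation would evaluate the candidate SADs in parallel pipelines.

```python
import numpy as np

def block_match_disparity(left, right, block=3, max_disp=4):
    """Dense block matching sketch (not the paper's hardware design).

    For each left-image pixel, find the horizontal shift d into the
    right image that minimizes the SAD over a block x block window.
    """
    h, w = left.shape
    r = block // 2  # window radius
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            best_d, best_sad = 0, float("inf")
            # only test shifts that keep the window inside the image
            for d in range(min(max_disp + 1, x - r + 1)):
                sad = np.abs(
                    left[y - r:y + r + 1, x - r:x + r + 1].astype(int)
                    - right[y - r:y + r + 1, x - d - r:x - d + r + 1].astype(int)
                ).sum()
                if sad < best_sad:
                    best_sad, best_d = sad, d
            disp[y, x] = best_d
    return disp

# Synthetic check: a right image that is the left image shifted by 2
# pixels should yield a disparity of 2 away from the image borders.
rng = np.random.default_rng(0)
left = rng.integers(0, 256, (12, 16)).astype(np.uint8)
right = np.roll(left, -2, axis=1)
disp = block_match_disparity(left, right, block=3, max_disp=4)
```

Applying the same SAD search along the temporal axis (previous frame instead of right camera image) yields block-based motion estimation, which is why both tasks can share matching hardware as described above.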