The requirements for a vehicle environment representation increase with the complexity of advanced driver assistance systems and automatic driving. The ability to interpret and predict the current traffic situation is essential for automatically deriving reasonable decisions. As a consequence, a state-of-the-art vehicle environment representation has to incorporate all relevant dynamic objects as well as static obstacles and context information. While dynamic objects are typically described by an object-based representation using state variables, static obstacles as well as free space areas are commonly modeled using grid-based methods. This chapter gives an introduction to both of these concepts.

The chapter is organized as follows: First, the difference between function-oriented and modular fusion architectures is discussed. Afterwards, the joint integrated probabilistic data association (JIPDA) filter is introduced, which is one method to realize an object-based environment model incorporating both state and existence uncertainties. Further, the representation of static obstacles with occupancy grids is described in detail, and the incorporation of measurements from different sensor types is illustrated. Finally, several hybrid environment representations are introduced, and an example of a strictly modular architecture, the hierarchical modular environment perception, is presented.
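The two complementary representations named above can be sketched as simple data structures. The following is a minimal, illustrative sketch; all names, fields, and the cell initialization value are assumptions for illustration and are not prescribed by the chapter. A dynamic object carries a continuous state vector plus an existence probability (the quantity the JIPDA filter, introduced later, estimates), while static obstacles and free space are held in a discretized occupancy grid:

```python
from dataclasses import dataclass, field

@dataclass
class DynamicObject:
    # Object-based representation: continuous state variables.
    x: float        # longitudinal position [m] (assumed ego-relative frame)
    y: float        # lateral position [m]
    vx: float       # longitudinal velocity [m/s]
    vy: float       # lateral velocity [m/s]
    p_exist: float  # existence probability in [0, 1]

@dataclass
class OccupancyGrid:
    # Grid-based representation: one occupancy probability per cell.
    cell_size: float  # edge length of a square cell [m]
    width: int        # number of cells along x
    height: int       # number of cells along y
    cells: list = field(default_factory=list)

    def __post_init__(self):
        if not self.cells:
            # Unknown cells start at 0.5, i.e., maximum uncertainty.
            self.cells = [0.5] * (self.width * self.height)

    def occupancy(self, ix: int, iy: int) -> float:
        """Occupancy probability of cell (ix, iy), row-major layout."""
        return self.cells[iy * self.width + ix]

@dataclass
class EnvironmentModel:
    # Hybrid container: dynamic objects plus static-obstacle grid.
    objects: list        # list of DynamicObject
    grid: OccupancyGrid  # static obstacles and free space
```

In a real system the object list would be maintained by a tracking filter and the grid updated from sensor measurements; the sketch only fixes the shape of the data.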
Requirements for Vehicle Environment Representations

A vehicle environment representation, often also referred to as a vehicle environment model, is understood to be a dynamic data structure that contains all relevant objects and infrastructure elements in the proximity of the ego vehicle. All elements have to be represented in a common reference system as accurately as possible with regard to position and time. The detection and temporal tracking of the objects and infrastructure elements are performed continuously using on-board sensors such as cameras and radar (see chapters "▶ Automotive Radar", "▶ Automotive Lidar", "▶ 3-D-Imaging for ADAS", "▶ Automotive Camera (Hardware)" and "▶ Fundamentals of Machine Vision"). In the future, increasingly more information from high-precision, attributed digital maps and, if applicable, external information based on Car2x communication will become available and can be incorporated in the data fusion. Figure 1 shows examples of common elements contained in a vehicle environment representation.

The objects and infrastructure elements that are relevant for a vehicle environment representation depend to a large extent on the functions that will use it. For example, a blind spot assist only requires the information whether there are currently objects in the rear or side area of the vehicle; the type of object is immaterial. More complex assistance systems, from automatic emergency steer assist all the way through to automatic driving, require more extensive perception capabilities and information. In these cases, the distances, speeds, and dimensions of all traff...
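The blind spot example can be made concrete with a short sketch. The zone boundaries below are assumed values chosen for illustration (they are not specified in the chapter), and the function name is hypothetical; the point is only that this function reduces the environment model to a single boolean, ignoring object type and dimensions:

```python
# Assumed rectangular blind spot zone in an ego-relative frame:
# x along the driving direction (negative = behind), y to the left.
BLIND_SPOT_X = (-5.0, 1.0)  # [m], rear to slightly ahead of the mirror
BLIND_SPOT_Y = (1.0, 4.0)   # [m], adjacent lane on the left

def object_in_blind_spot(objects):
    """Return True if any tracked object position (x, y) lies in the zone.

    Only presence matters for a blind spot assist; object type,
    speed, and dimensions are deliberately ignored here.
    """
    for x, y in objects:
        if (BLIND_SPOT_X[0] <= x <= BLIND_SPOT_X[1]
                and BLIND_SPOT_Y[0] <= y <= BLIND_SPOT_Y[1]):
            return True
    return False
```

A system such as emergency steer assist could not work from such a reduced view; it needs the full state (distance, speed, dimensions) of the surrounding traffic, which motivates the richer representations discussed in the remainder of the chapter.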