Sensing is an essential part of autonomous driving and intelligent transportation systems: it enables a vehicle to better understand both itself and its surrounding environment. Vehicular networks support information sharing among vehicles and hence enable multi-vehicle, multi-sensor cooperative sensing, which can greatly improve sensing performance. However, two issues must be addressed. First, multi-sensor data fusion must handle heterogeneous data formats. Second, the cooperative sensing process must cope with the low data quality and perception blind spots of individual vehicles. To solve these problems, this paper adopts the occupancy grid map to facilitate the fusion of multi-vehicle, multi-sensor data. Dynamic target detection boxes and pixel information from the camera data are mapped onto the static environment of the LiDAR point cloud, a kernel density estimation of the spatial occupancy probability distribution is designed to characterize the fused data, and an occupancy grid map is generated at both the probability level and the spatial level. Real-world experiments show that the proposed fusion framework better accommodates the data of different sensors and expands the sensing range through collaboration among multiple vehicles in vehicular networks.
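To make the kernel-density step concrete, the following is a minimal sketch, not the authors' implementation: it estimates a per-cell occupancy probability for a 2-D grid by applying a Gaussian KDE to obstacle points already fused into a common frame. The grid extents, resolution, and bandwidth values are illustrative assumptions.

```python
# Sketch only: KDE-based occupancy probability over a 2-D grid.
# Grid extents, resolution, and bandwidth are assumed values.
import numpy as np
from scipy.stats import gaussian_kde

def occupancy_grid_kde(points_xy, x_range=(-20.0, 20.0),
                       y_range=(-20.0, 20.0), resolution=0.5,
                       bandwidth=0.3):
    """Return an occupancy-probability grid from fused 2-D obstacle points.

    points_xy : (N, 2) array of obstacle coordinates in metres,
                e.g. LiDAR returns plus projected camera detections.
    resolution: cell size in metres (assumed value).
    bandwidth : KDE smoothing factor (assumed value).
    """
    # gaussian_kde expects data shaped (dimensions, samples).
    kde = gaussian_kde(points_xy.T, bw_method=bandwidth)

    xs = np.arange(x_range[0], x_range[1], resolution)
    ys = np.arange(y_range[0], y_range[1], resolution)
    gx, gy = np.meshgrid(xs, ys)
    cells = np.vstack([gx.ravel(), gy.ravel()])

    # Evaluate the density at every cell centre and normalise to [0, 1]
    # so each cell holds an occupancy probability rather than a raw density.
    density = kde(cells).reshape(gy.shape)
    return density / density.max()

# Example: obstacle points contributed by two cooperating vehicles,
# already transformed into a shared coordinate frame.
points = np.vstack([np.random.randn(200, 2) * 2.0 + [5.0, 5.0],
                    np.random.randn(150, 2) * 1.5 + [-8.0, 3.0]])
grid = occupancy_grid_kde(points)
print(grid.shape)  # (80, 80) cells at 0.5 m resolution
```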