New advanced driver assistance system/automated driving (ADAS/AD) functions have the potential to significantly enhance the safety of vehicle passengers and road users, while also enabling new transportation applications and potentially reducing CO2 emissions. To achieve the next level of driving automation, i.e., SAE Level 3, physical test drives need to be supplemented by simulations in virtual test environments. A major challenge for today's virtual test environments is to provide a realistic representation of the vehicle's perception system (camera, lidar, and radar). New and improved sensor models are therefore required to perform representative virtual tests that can supplement physical test drives. In this article, we present a computationally efficient, mathematically complete, and geometrically exact generic sensor modeling approach that solves the field-of-view (FOV) and occlusion task. We also discuss potential extensions, such as bounding-box cropping and sensor-specific, weather-dependent FOV-reduction approaches for camera, lidar, and radar. The performance of the new modeling approach is demonstrated using camera measurements from a test campaign conducted in Hungary in 2020 and three artificial scenarios: a multi-target scenario with an adjacent truck occluding other road users, and two traffic-jam situations in which the ego vehicle is either a car or a truck. These scenarios are benchmarked against existing sensor modeling approaches that only exclude objects outside the sensor's maximum detection range or angle. The presented modeling approach can be used as is or serve as the basis for a more complex sensor model, since it reduces the number of potentially detectable targets and thereby improves the performance of subsequent simulation steps.
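To make the baseline concrete, the following is a minimal sketch of the kind of range/angle pre-filter that the existing approaches mentioned above perform, i.e., excluding objects outside the sensor's maximum detection range or opening angle. This is not the authors' implementation; all names (SensorFov, in_fov) and the parameter values are illustrative assumptions, and the geometrically exact occlusion handling of the proposed model is deliberately not reproduced here.

```python
# Illustrative sketch (hypothetical names) of a baseline FOV pre-filter:
# keep an object only if it lies within the sensor's maximum detection
# range and horizontal opening angle. Positions are given in the sensor
# frame, with x along the boresight and y to the left.
import math
from dataclasses import dataclass

@dataclass
class SensorFov:
    max_range_m: float     # maximum detection range in meters
    half_angle_rad: float  # half of the horizontal opening angle

def in_fov(fov: SensorFov, x: float, y: float) -> bool:
    dist = math.hypot(x, y)          # Euclidean distance to the object
    bearing = math.atan2(y, x)       # azimuth relative to the boresight
    return dist <= fov.max_range_m and abs(bearing) <= fov.half_angle_rad

# Example: an assumed 150 m sensor with a 60 degree horizontal FOV
sensor = SensorFov(max_range_m=150.0, half_angle_rad=math.radians(30.0))
objects = [(50.0, 10.0), (120.0, 90.0), (200.0, 0.0)]
visible = [p for p in objects if in_fov(sensor, *p)]
print(visible)  # only (50.0, 10.0) passes both the range and angle test
```

Note that such a filter keeps every in-range, in-angle object regardless of intervening geometry, e.g., the adjacent truck in the multi-target scenario; resolving such cases is exactly the occlusion task the proposed model addresses.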