Because autonomous vehicles in operation today can observe only a limited area with their onboard sensors, safety regulations often dictate very low speeds. However, as more and more sensors become available in the environment, their information can be fused into a shared context model that provides an extended view to support the autonomous vehicles. In this paper, we consider a scenario in which a publicly accessible area is populated with autonomous transport vehicles, human-guided vehicles such as trucks or bicycles, and pedestrians. We analyze requirements and challenges for highly dynamic context models in this scenario. Furthermore, we propose a comprehensive system architecture that copes with these challenges, namely deterministic processing of multiple sensor updates at high throughput rates, prediction of moving objects, and on-line quality assessment, and we demonstrate the feasibility of this approach by implementing the generic system architecture with laser scanners for object detection.