Safety testing and validation of complex autonomous systems require a comprehensive and reliable analysis of performance and uncertainty. Uncertainty quantification in particular plays a vital role for perception systems operating in open-context environments that are neither foreseeable nor deterministic. Safety assurance based on field tests or corner cases alone is therefore not feasible, as both the effort and the potential risks are high. Simulation offers a way out: it allows potentially hazardous situations to be examined without any real danger by systematically and quickly varying a wide range of (input) parameters. To do so, simulations need accurate models that represent the complex system and, in particular, include uncertainty as an inherent property, so that the interdependence between system components and the environment is accurately reflected. We present an approach to modeling perception architectures via suitable meta-models, enabling a holistic safety analysis that quantifies the uncertainties within the system. The models include aleatoric or epistemic uncertainty, depending on the nature of the approximated component. A showcase of the proposed method illustrates how validation under uncertainty can be applied to camera-based object detection.
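To make these notions concrete, the following is a minimal, purely illustrative sketch, not the authors' implementation: a surrogate meta-model for a camera-based object detector in which aleatoric uncertainty is represented by sampled pixel noise and stochastic missed detections, propagated through Monte Carlo simulation. All names and parameter values (detector_meta_model, sigma_px, p_miss) are hypothetical.

```python
# Illustrative sketch: a meta-model of a camera-based object detector whose
# aleatoric uncertainty (output scatter, missed detections) is sampled and
# propagated via Monte Carlo. Names and values are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(seed=0)

def detector_meta_model(true_center_px, sigma_px=2.0, p_miss=0.05):
    """Surrogate for a camera-based object detector.

    Aleatoric uncertainty is modeled by Gaussian pixel noise on the
    detected object center and a Bernoulli missed-detection event.
    """
    if rng.random() < p_miss:      # stochastic missed detection
        return None
    noise = rng.normal(0.0, sigma_px, size=2)
    return np.asarray(true_center_px, dtype=float) + noise

# Monte Carlo propagation: simulate the same scene many times and collect
# the empirical distribution of the detector output.
true_center = (320.0, 240.0)       # hypothetical ground-truth pixel position
samples = [detector_meta_model(true_center) for _ in range(10_000)]
hits = np.array([s for s in samples if s is not None])

print(f"detection rate: {len(hits) / len(samples):.3f}")
print(f"mean detected center: {hits.mean(axis=0)}")
print(f"per-axis std (aleatoric spread): {hits.std(axis=0)}")
```

Repeating such sampling across systematically varied scenario parameters would yield output distributions rather than point estimates, which is the kind of information a validation under uncertainty, as outlined above, can draw on.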