<div class="section abstract"><div class="htmlview paragraph">As autonomous vehicles continue to develop, additional challenges emerge. One of these challenges arises in mixed traffic scenarios, where automated and autonomous vehicles coexist with manually operated vehicles as well as other road users such as cyclists and pedestrians. In this evolving landscape, understanding, predicting, and mimicking human driving behavior is becoming not only a challenging but also a compelling facet of autonomous driving research. This capability is needed not only for safety reasons but also to promote trust in artificial intelligence (AI), particularly in self-driving cars, where trust is often compromised by the opacity of neural network models. Addressing this trust issue is the central goal of this study. Imitation learning (IL) is a common approach for imitating human driving behavior from expert demonstrations; however, balancing performance and explainability in these models remains a major challenge. To generate training data efficiently, researchers have turned to simulation environments, because collecting data in the real world is not only costly and time-consuming but also potentially dangerous. Simulations provide a controlled and scalable platform for training reliable AI agents. The goal of this research is to bridge the gap between IL, explainability, and trust in AI-controlled vehicles navigating mixed traffic scenarios. Our proposed approach involves a novel fusion of explainable neural network architectures with parameterization techniques that enable precise control of the learned driving behavior. By using advanced simulation environments and a variety of interconnected simulators that provide different levels of immersion, we intend to collect a wide range of information and training data. This wealth of data will allow us to draw conclusions about the effectiveness of these simulator methods and to ensure the generalizability of our model.</div></div>