Advancements in deep learning have revolutionized the artificial intelligence (AI) landscape. However, despite considerable performance gains, deep learning models' reliance on data and their intrinsic opacity remain a challenge, hindering our ability to understand the reasons behind their failures. This paper introduces a headless open-source framework, dubbed MizSIM, built on the Unreal Engine (UE) to generate high-volume, high-variety synthetic datasets for AI training and evaluation. By manipulating agent and environment parameters, MizSIM enables detailed performance analysis and failure diagnosis. Leveraging UE's open-source distribution, cost-effective assets, and high-quality graphics, together with tools such as AirSim and the Robot Operating System (ROS), MizSIM offers a user-friendly design and seamless data extraction. In this article, we demonstrate two MizSIM workflows: one for a single-life computer vision task and another for evaluating an object detector across hundreds of simulated lives. The overarching aim is to establish a closed-loop environment that enhances AI effectiveness and transparency.
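
To give a concrete sense of the data-extraction path the abstract alludes to, the following is a minimal Python sketch of pulling imagery out of an AirSim-instrumented UE scene. It is purely illustrative and not MizSIM's actual interface: the camera name "0" and the choice of the multirotor client are assumptions made for the example.

```python
# Minimal sketch (assumed setup): retrieve an RGB frame and the matching
# segmentation mask from an AirSim-instrumented Unreal Engine scene.
# The camera name "0" and the multirotor client are illustrative choices,
# not MizSIM-specific API.
import airsim
import numpy as np

# Connect to the AirSim plugin running inside the UE simulation.
client = airsim.MultirotorClient()
client.confirmConnection()

# Request uncompressed scene and segmentation images from camera "0".
responses = client.simGetImages([
    airsim.ImageRequest("0", airsim.ImageType.Scene, False, False),
    airsim.ImageRequest("0", airsim.ImageType.Segmentation, False, False),
])

for response in responses:
    # Uncompressed responses arrive as a flat byte buffer; reshape to H x W x 3.
    frame = np.frombuffer(response.image_data_uint8, dtype=np.uint8)
    frame = frame.reshape(response.height, response.width, 3)
```

Paired frames and labels of this kind, captured while agent and environment parameters are varied, are the raw material for the dataset-generation and evaluation workflows described in the remainder of the paper.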