“…Humans require a narrative form of explanation, which contrasts with the binary nature of AI systems' outputs. As noted by Reed, Grieman, and Early (2021), most citizens would not trust an AI system if they were simply told 'We cannot explain how it works, but it is really safe'. This prompted the development of an entire field, eXplainable AI (XAI), which focuses on designing tools that can provide explanations for the decisions produced by complex autonomous systems.…”