example the responsible design (Dennehy et al., 2021) and governance (Mäntymäki et al., 2022b) of AI systems. While organisations are increasingly investing in ethical AI and Responsible AI (RAI) (Zimmer et al., 2022), recent reports suggest that this comes at a cost and may lead to burnout in responsible-AI teams (Heikkilä, 2022). Thus, it is critical to consider how we educate about RAI (Grøder et al., 2022) and rethink our traditional learning designs (Pappas & Giannakos, 2021), as this can influence end-users' perceptions of AI applications (Schmager et al., 2023) as well as how future employees approach the design and implementation of AI applications (Rakova et al., 2021; Vassilakopoulou et al., 2022).

The use of algorithmic decision-making and decision-support processes, particularly AI, is becoming increasingly pervasive in the public sector, including high-risk application areas such as healthcare, traffic, and finance (European Commission, 2020). Against this backdrop, there is growing concern over the ethical use and safety of AI, fuelled by reports of ungoverned military applications (Butcher & Beridze, 2019; Dignum, 2020), privacy violations attributed to facial recognition technologies used by the police (Rezende, 2022), unwanted biases exhibited by AI applications used by courts (Imai et al., 2020), and racial biases in clinical algorithms (Vyas et al., 2020). The opacity and lack of explainability frequently attributed to AI systems make evaluating the trustworthiness of algorithmic decisions challenging even for technical experts, let alone the public. Together with the algorithm-propelled proliferation of misinformation, hate speech, and polarising content on social media platforms, this creates a high risk of eroding trust in the algorithmic systems used by the public sector (Janssen et al., 2020). Ensuring that people can trust algorithmic processes is essential not only for reaping the potential benefits of AI (Dignum, 2020) but also for fostering trust and resilience at a societal level.

AI researchers and practitioners have expressed their fears about AI systems being developed that are