“…Recent years have witnessed a number of efforts towards what this paper refers to as formal XAI (Shih, Choi, and Darwiche 2018; Ignatiev, Narodytska, and Marques-Silva 2019a; Shih, Choi, and Darwiche 2019; Narodytska et al. 2019; Ignatiev, Narodytska, and Marques-Silva 2019b,c; Darwiche 2020; Ignatiev 2020; Darwiche and Hirth 2020; Audemard, Koriche, and Marquis 2020; Boumazouza et al. 2020; Ignatiev et al. 2020a; Marques-Silva et al. 2020; Izza, Ignatiev, and Marques-Silva 2020; Barceló et al. 2020; Marques-Silva et al. 2021; Ignatiev and Marques-Silva 2021; Asher, Paul, and Russell 2021; Wäldchen et al. 2021; Huang et al. 2021b; Audemard et al. 2021a; Boumazouza et al. 2021; Blanc, Lange, and Tan 2021; Arenas et al. 2021; Darwiche and Marquis 2021; Huang et al. 2022; Gorji and Rubin 2022). In contrast with other approaches to XAI, which are currently more visible, formal XAI is based on rigorously defined (and hence formal) explanations, offering a level of rigor grounded in the logic languages used to represent the ML models.…”