“…(B) Mapping of features. (Boumazouza et al., 2020, 2021; Darwiche, 2020; Darwiche and Hirth, 2020, 2022; Izza et al., 2020, 2021, 2022a; Rago et al., 2020, 2021; Shi et al., 2020; Amgoud, 2021; Arenas et al., 2021; Asher et al., 2021; Blanc et al., 2021, 2022a; Cooper and Marques-Silva, 2021; Darwiche and Marquis, 2021; Huang et al., 2021a,b, 2022; Ignatiev and Marques-Silva, 2021; Marques-Silva, 2021, 2022; Lorini, 2021, 2022a; Malfa et al., 2021; Wäldchen et al., 2021; Amgoud and Ben-Naim, 2022; Ferreira et al., 2022; Gorji and Rubin, 2022; Marques-Silva and Ignatiev, 2022; Wäldchen, 2022; Yu et al., 2022), and are characterized by formally provable guarantees of rigor, given the underlying ML models. Given such guarantees of rigor, logic-based explainability should be contrasted with well-known model-agnostic approaches to XAI (Ribeiro et al., 2016, 2018; Lundberg and Lee, 2017; Guidotti et al., 2019), which offer no guarantees of rigor.…”
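To make the contrast concrete, the sketch below is a toy illustration only, not taken from any of the cited works: the Boolean classifier, feature encoding, and function names are assumptions. It checks, by exhaustive enumeration, that a fixed subset of feature values is a sufficient reason for a prediction, i.e., the prediction is provably entailed for every completion of the remaining features; this is the kind of guarantee that heuristic, model-agnostic scores do not provide.

from itertools import product

# Toy Boolean classifier: predicts 1 iff (x1 AND x2) OR x3 holds.
def classifier(x1, x2, x3):
    return int((x1 and x2) or x3)

def is_sufficient_reason(fixed, instance, model=classifier, n_features=3):
    # True iff fixing the features indexed by `fixed` to their values in
    # `instance` entails the model's prediction on `instance` for every
    # assignment to the remaining (free) features. Checked here by brute
    # force, which is feasible only for tiny feature spaces; the cited
    # works rely on automated reasoners (e.g. SAT/SMT solvers or
    # knowledge compilation) to obtain the same guarantee at scale.
    target = model(*instance)
    free = [i for i in range(n_features) if i not in fixed]
    for values in product([0, 1], repeat=len(free)):
        point = list(instance)
        for i, v in zip(free, values):
            point[i] = v
        if model(*point) != target:
            return False  # counterexample: fixed values do not guarantee the prediction
    return True

instance = (1, 1, 0)                           # classifier(1, 1, 0) == 1
print(is_sufficient_reason({0, 1}, instance))  # True: x1=1, x2=1 provably entail the prediction
print(is_sufficient_reason({0}, instance))     # False: x1=1 alone does not

Whether the entailment check succeeds or fails is a formal fact about the model itself, which is what distinguishes this style of explanation from sampling-based feature attributions.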