“…Logic-based (or formal) explanation approaches have been studied in a growing body of recent research (Shih et al, 2018, 2019; Ignatiev et al, 2019a, b, c, 2020a, 2022; Narodytska et al, 2019; Wolf et al, 2019; Audemard et al, 2020, 2021, 2022a, b; Boumazouza et al, 2020, 2021; Darwiche, 2020; Darwiche and Hirth, 2020, 2022; Izza et al, 2020, 2021, 2022a, b; Marques-Silva et al, 2020, 2021; Rago et al, 2020, 2021; Shi et al, 2020; Amgoud, 2021; Arenas et al, 2021; Asher et al, 2021; Blanc et al, 2021, 2022a, b; Cooper and Marques-Silva, 2021; Darwiche and Marquis, 2021; Huang et al, 2021a, b, 2022; Ignatiev and Marques-Silva, 2021; Izza and Marques-Silva, 2021, 2022; Liu and Lorini, 2021, 2022a; Malfa et al, 2021; Wäldchen et al, 2021; Amgoud and Ben-Naim, 2022; Ferreira et al, 2022; Gorji and Rubin, 2022; Huang and Marques-Silva, 2022; Marques-Silva and Ignatiev, 2022; Wäldchen, 2022; Yu et al, 2022), and are characterized by formally provable guarantees of rigor with respect to the underlying ML models. Given these guarantees, logic-based explainability should be contrasted with well-known model-agnostic approaches to XAI (Ribeiro et al, 2016, 2018; Lundberg and Lee, 2017; Guidotti et al, 2019), which offer no such guarantees.…”
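To make the contrast concrete, the sketch below illustrates what a "formally provable guarantee" can mean in this setting. It is not taken from any of the cited works: the toy boolean classifier, the function names, and the brute-force sufficiency check are all assumptions for illustration. It computes a subset-minimal sufficient reason (an abductive explanation) by the standard deletion-based linear search; the returned feature set provably forces the prediction for every completion of the remaining features, which is exactly the kind of guarantee that heuristic model-agnostic explainers do not offer.

```python
from itertools import product

def is_sufficient(predict, instance, fixed):
    """Check that fixing the features in `fixed` to their values in
    `instance` forces the prediction for every assignment to the
    remaining (free) features.  Brute force over boolean features,
    so only viable for toy models."""
    target = predict(instance)
    free = [i for i in range(len(instance)) if i not in fixed]
    for values in product([0, 1], repeat=len(free)):
        candidate = list(instance)
        for i, v in zip(free, values):
            candidate[i] = v
        if predict(candidate) != target:
            return False
    return True

def abductive_explanation(predict, instance):
    """Deletion-based linear search: start from all features and drop
    any feature whose removal keeps the remaining set sufficient.
    The result is a subset-minimal sufficient reason."""
    fixed = set(range(len(instance)))
    for i in range(len(instance)):
        if is_sufficient(predict, instance, fixed - {i}):
            fixed.discard(i)
    return sorted(fixed)

if __name__ == "__main__":
    # Hypothetical classifier: predicts 1 iff (x0 AND x1) OR x2.
    predict = lambda x: int((x[0] and x[1]) or x[2])
    # For instance (1, 1, 0) the minimal sufficient reason is {x0, x1}:
    # those two values alone entail the prediction, regardless of x2.
    print(abductive_explanation(predict, [1, 1, 0]))  # -> [0, 1]
```

In the cited literature the exhaustive enumeration in `is_sufficient` is replaced by a call to an automated reasoner (e.g. a SAT, SMT, or MILP oracle), which is what makes the approach scale beyond toy models while preserving the same soundness guarantee.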