Different stakeholders have different needs for explanation [12,75], but these needs are often neither well articulated nor clearly distinguished from one another [38,41,54,65,84]. Clarity about the intended use of an explanation is crucial for selecting an appropriate XAI tool, since specialized methods exist for specific needs such as debugging [39], formal verification (safety) [18,28,85], uncertainty quantification [1,79], actionable recourse [40,76], mechanism inference [20], causal inference [11,26,62], robustness to adversarial inputs [48,52], data accountability [87], social transparency [23], interactive personalization [78], and fairness and algorithmic bias [60]. In contrast, feature importance methods like LIME [66] and SHAP [49,50] focus exclusively on computing quantitative evidence for indicative conditionals [10,30] (of the form "If the applicant doesn't have enough income, then she won't get the loan approved"), while some newer counterfactual explanation methods [8,56,72] and negative contrastive methods [51] find similar evidence for subjunctive conditionals [14,64] (of the form "If the applicant increased her income, then she would get the loan approved").
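To make the contrast concrete, the sketch below illustrates both kinds of evidence on a toy loan-approval rule. It is not drawn from any of the cited methods: the lender rule, feature names, thresholds, and perturbation scale are all invented for illustration. The first loop computes a crude perturbation-based sensitivity score in the spirit of feature importance methods (the indicative reading); the second searches for the smallest single-feature increase that would flip the decision, in the spirit of counterfactual methods (the subjunctive reading).

```python
import numpy as np

FEATURES = ["income", "debt", "credit_history_years"]

def approve(x):
    """Toy lender (invented rule): approve when income sufficiently exceeds debt."""
    income, debt, history = x
    return income - debt + 0.5 * history > 50.0

applicant = np.array([45.0, 10.0, 6.0])  # currently denied
rng = np.random.default_rng(0)

# Indicative-conditional evidence (feature-importance style): how sensitive is
# the current decision to local perturbations of each feature?
for i, name in enumerate(FEATURES):
    flips = 0
    for _ in range(1000):
        perturbed = applicant.copy()
        perturbed[i] += rng.normal(scale=10.0)
        flips += approve(perturbed) != approve(applicant)
    print(f"{name:22s} local sensitivity: {flips / 1000:.2f}")

# Subjunctive-conditional evidence (counterfactual style): the smallest
# single-feature increase that would flip the decision to "approved".
for i, name in enumerate(FEATURES):
    for delta in np.arange(1.0, 201.0, 1.0):
        candidate = applicant.copy()
        candidate[i] += delta
        if approve(candidate):
            print(f"If {name} were higher by {delta:.0f}, the loan would be approved.")
            break
```

The first loop only quantifies how the present decision depends on each feature around the applicant's current values; only the second yields a statement about what would happen under a change, which is the subjunctive reading that counterfactual and contrastive methods target.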