“…This means that if the ML algorithm had been trained on the modified training data, it would not have exhibited the unexpected or undesirable behavior, or would have exhibited it to a lesser degree. Explanations generated by our framework, which complement existing approaches in XAI, are crucial for helping system developers and ML practitioners debug ML algorithms for data errors and bias in training data, such as measurement errors and misclassifications [35,42,94], data imbalance [27], missing data and selection bias [29,62,63], covariate shift [74,82], technical biases introduced during data preparation [85], and poisoned data points injected through adversarial attacks [36,43,65,83]. It is well known in the algorithmic fairness literature that information about the source of bias is critical for building fair ML algorithms, because no single bias mitigation solution fits all situations [27,31,36,82,94].…”
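The counterfactual reading in the first sentence can be made concrete with a small experiment: intervene on the training data, retrain, and check whether the undesirable behavior shrinks. The sketch below is a minimal illustration of that idea, not the paper's actual framework; the synthetic data, the `suspect` index set, and the probe point are all illustrative assumptions.

```python
# Minimal sketch of the counterfactual idea: drop a suspected-mislabeled
# subset of the training data, retrain, and compare model behavior on a
# probe input. All data and indices here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training set; flip the labels of one block to simulate
# misclassified training examples (a data error of the kind cited above).
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
suspect = np.arange(20)          # suspected mislabeled subset (assumed known)
y[suspect] = 1 - y[suspect]

probe = np.array([[0.3, 0.2]])   # probe input near the decision boundary

original = LogisticRegression().fit(X, y)
print("P(y=1) before repair:", original.predict_proba(probe)[0, 1])

# Counterfactual intervention: remove the suspect subset and retrain.
mask = np.ones(len(X), dtype=bool)
mask[suspect] = False
repaired = LogisticRegression().fit(X[mask], y[mask])
print("P(y=1) after repair:", repaired.predict_proba(probe)[0, 1])
```

If the retrained model's behavior on the probe changes in the expected direction, the suspect subset is evidence for the kind of training-data explanation the passage describes.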