The evaluation of the explainability of DNN models is known to be a challenging task, necessitating such an effort. From another perspective, while there have been many surveys of the literature on adversarial attacks and robustness [7,8,11,25,29,35,46,51,57,61,65,69,75,77,101,104,112,113,116,118,119,121,122,129,135], which focus on attacks against the predictive outcome of these models, there has been no effort so far to study and consolidate existing work on attacks against the explainability of DNN models. Many recent efforts have demonstrated the vulnerability of explanations (or attributions¹) to human-imperceptible input perturbations across image, text, and tabular data [36,45,55,62,107,108,133].
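To make this threat concrete, the sketch below illustrates the generic recipe behind such attribution attacks: search for a small input perturbation that displaces a gradient-based saliency map while, ideally, leaving the prediction intact. This is a minimal illustration under stated assumptions, not the method of any one cited work; the toy softplus network, random input, and hyperparameters (`eps`, `step`, iteration count) are placeholders.

```python
# Minimal sketch of a gradient-based attribution attack (assumptions: toy
# softplus network and random "image" stand in for a real model/dataset).
# Goal: perturb the input so the saliency (input-gradient) map moves away
# from the original, while the predicted class ideally stays the same.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.Softplus(),  # softplus keeps the
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),         # saliency map twice-
    nn.Linear(8, 10))                              # differentiable
model.eval()

def saliency(x):
    """Input-gradient attribution for the top predicted class."""
    if not x.requires_grad:
        x = x.detach().requires_grad_(True)
    score = model(x).max(dim=1).values.sum()
    grad, = torch.autograd.grad(score, x, create_graph=True)
    return grad.abs()

x = torch.rand(1, 3, 32, 32)          # stand-in for a real image
y0 = model(x).argmax(dim=1)           # original prediction
s0 = saliency(x).detach()             # original attribution map

eps, step = 8 / 255, 1 / 255          # L-infinity budget and step size
delta = torch.zeros_like(x)
for _ in range(50):
    d = delta.clone().requires_grad_(True)
    # Maximize the distance between perturbed and original attributions.
    loss = -(saliency(x + d) - s0).abs().sum()
    loss.backward()
    with torch.no_grad():
        delta = (delta - step * d.grad.sign()).clamp(-eps, eps)

x_adv = (x + delta).clamp(0, 1)
print("prediction unchanged:", bool((model(x_adv).argmax(1) == y0).item()))
print("attribution shift (L1):", (saliency(x_adv) - s0).abs().sum().item())
```

Two design notes: a smooth activation (softplus rather than ReLU) is used because the attack differentiates through the saliency map itself, which requires non-zero second derivatives; and a complete attack would additionally constrain the perturbed input's prediction to match the original (e.g., via projection at each step), whereas the sketch only verifies this at the end.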