“…Different types of defenses have emerged to address this shortcoming, perhaps the most successful being adversarial training [5], [6], [7], [8] and defensive distillation [9], [10]. However, although neural networks trained with these defenses empirically perform better against adversarial attacks than those without them, these methods provide neither broad design insights nor formal guarantees of robustness.…”