“…There are three main orthogonal approaches to combating adversarial attacks: (i) using adversarial attacks as a data augmentation mechanism, including adversarially perturbed samples in the training data to induce robustness in the trained model [41,78,79,80,10,75,8,74]; (ii) preprocessing the input with a denoising function or deep network [38,15,22,24,45] to counteract the effect of adversarial perturbations; and (iii) training an auxiliary network to detect adversarial examples and refuse inference on samples flagged as adversarial [46,37,59,60,23,39,40,35,12,25,18,1]. Our work falls under adversarial example detection: it requires neither retraining the main network (as in adversarial training) nor degrading the input quality (as in preprocessing defenses).…”
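To make approach (i) concrete, the following is a minimal sketch of generating a perturbed training sample with a one-step gradient-sign attack (FGSM-style) on a toy logistic-regression model. The model, weights, and epsilon are illustrative assumptions, not the setup of any of the cited works; in practice the gradient would come from the deep network being trained.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(p, y):
    # binary cross-entropy for a single example
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm_perturb(x, y, w, b, eps):
    """One-step gradient-sign perturbation on a logistic model:
    x_adv = x + eps * sign(dL/dx)."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w  # dL/dx for sigmoid + cross-entropy
    return x + eps * np.sign(grad_x)

# toy model and input (illustrative values only)
w = np.array([0.8, -1.2, 0.5])
b = 0.1
x = np.array([1.0, 0.5, -0.3])
y = 1.0

x_adv = fgsm_perturb(x, y, w, b, eps=0.1)
loss_clean = bce_loss(sigmoid(w @ x + b), y)
loss_adv = bce_loss(sigmoid(w @ x_adv + b), y)
# x_adv incurs higher loss than x; adding (x_adv, y) pairs to the
# training set is the data-augmentation defense of approach (i)
```

The perturbation is bounded by eps in the L-infinity norm, so the augmented samples stay visually close to the originals while maximally (to first order) increasing the model's loss.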