“…Despite the unprecedented progress of Deep Neural Networks (DNNs) [20,21,23], their vulnerability to adversarial examples [17,39] poses serious threats to security-sensitive applications, e.g., face recognition [34], autonomous driving [16], etc. To deploy DNNs securely in various applications, it is necessary to conduct an in-depth analysis of the intrinsic properties of adversarial examples, which has inspired numerous studies on adversarial attacks [3-6, 9, 12, 14, 29, 30, 42] and defenses [19,28,36,44,45,49]. Existing attacks can be divided into two categories: white-…”