Deep neural network-based image classification systems are vulnerable to adversarial attack algorithms, which generate input examples by adding deliberately crafted yet imperceptible noise to original inputs. To keep the perturbation invisible to the human eye while preserving attack strength, existing methods modify pixels over many iterations, which is time consuming. By using a sparse mapping network to map the input into a higher-dimensional space, the search space of the adversarial perturbation distribution is enlarged, so that perturbation information can be acquired more effectively. To balance search speed against search effectiveness, a sparsity constraint is introduced to suppress unnecessary neurons during parameter updating. Because the human eye has different sensitivities to different colors, the maps of each color channel are disturbed by perturbations of different strengths to reduce visual perceptibility. Numerical experiments confirm that, compared with state-of-the-art adversarial attack algorithms, the proposed SparseAdv achieves high attack ability, better imperceptible visualization, and faster generation speed.
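The channel-wise weighting idea above can be sketched in a few lines: scale the perturbation differently per color channel before adding it to the image. This is a minimal illustration, not the paper's implementation; the function name and the weight values are assumptions (the eye is most sensitive to green, so that channel is weighted lowest here).

```python
import numpy as np

def channel_weighted_perturbation(image, noise, weights=(1.0, 0.4, 0.7)):
    """Add an adversarial perturbation scaled per RGB channel.

    `weights` are illustrative placeholders, not values from the paper:
    the green channel gets the smallest weight because human vision is
    most sensitive to green light, so perturbing it is most noticeable.
    """
    image = np.asarray(image, dtype=np.float32)
    noise = np.asarray(noise, dtype=np.float32)
    # Broadcast the per-channel weights over height and width.
    scaled = noise * np.asarray(weights, dtype=np.float32).reshape(1, 1, 3)
    # Keep the adversarial example in the valid [0, 1] pixel range.
    return np.clip(image + scaled, 0.0, 1.0)
```

Any real attack would fold such a weighting into the perturbation-generation loop rather than apply it as a post-processing step, but the effect on perceptibility is the same.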
Deep neural networks are susceptible to interference from deliberately crafted noise, which can lead to incorrect classification results. Existing approaches make little use of latent-space information, instead performing pixel-domain modification in the input space, which increases computational cost and decreases transferability. In this work, we propose an effective adversarial distribution searching-driven attack (ADSAttack) algorithm to generate adversarial examples against deep neural networks. ADSAttack introduces an affiliated network that searches for potential distributions in the image latent space for synthesizing adversarial examples, and it uses an edge-detection algorithm to locate low-level feature mappings in the input space to sketch the minimum effective disturbed area. Experimental results demonstrate that ADSAttack achieves higher transferability, better imperceptible visualization, and faster generation speed than traditional algorithms: generating 1000 adversarial examples takes 11.08 s, with an average success rate of 98.01%.
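The edge-guided masking step can be illustrated with a toy sketch: detect high-gradient pixels and restrict the perturbation to those locations. The paper's exact edge detector and threshold are not specified here; this version uses simple finite differences as a stand-in, and all names are illustrative.

```python
import numpy as np

def edge_mask(gray, threshold=0.1):
    """Binary mask of high-gradient pixels via finite differences.

    A stand-in for ADSAttack's edge-detection step; the actual paper's
    detector and threshold value are assumptions here.
    """
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[:, :-1] = np.abs(np.diff(gray, axis=1))  # horizontal gradient
    gy[:-1, :] = np.abs(np.diff(gray, axis=0))  # vertical gradient
    return (np.maximum(gx, gy) > threshold).astype(np.float32)

def masked_perturbation(gray, perturbation, threshold=0.1):
    # Apply the perturbation only inside the sketched edge region,
    # leaving smooth areas of the image untouched.
    mask = edge_mask(gray, threshold)
    return np.clip(gray + perturbation * mask, 0.0, 1.0)
```

Restricting the disturbed area this way is what keeps the modification small and visually inconspicuous while still hitting the low-level features the classifier relies on.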