Object detection is one of the essential tasks of computer vision. Object detectors based on deep neural networks are increasingly used in safety-sensitive applications such as face recognition, video surveillance, and autonomous driving. It has been shown that object detectors are vulnerable to adversarial attacks. We propose a novel black-box attack method that can successfully attack both regression-based and region-based object detectors. Using the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) as the primary method for generating adversarial examples, we introduce techniques that reduce the search dimension of the optimization problem and thereby reduce the number of queries. Our method adds adversarial perturbations only inside the object box to achieve a precise attack. The proposed attack can hide a specified object with an attack success rate of 86% and an average of 5,124 queries, and hide all objects with a success rate of 74% and an average of 6,154 queries. Our work illustrates the effectiveness of CMA-ES for generating adversarial examples and demonstrates the vulnerability of object detectors to adversarial attacks.
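A minimal sketch of the kind of query-based, box-restricted CMA-ES attack the abstract describes, not the authors' implementation: `query_detector` is a hypothetical black-box oracle returning the detector's confidence for the target object, and the paper's search-dimension reductions are not reproduced here.

```python
import numpy as np
import cma  # pycma: Covariance Matrix Adaptation Evolution Strategy


def attack_box(image, box, query_detector, eps=16, sigma0=0.3, max_queries=6000):
    """Search for a perturbation confined to one bounding box that lowers
    the black-box detector's confidence for that object (illustrative only)."""
    x0, y0, x1, y1 = box
    h, w = y1 - y0, x1 - x0
    dim = h * w * 3  # optimize only the pixels inside the object box

    def loss(z):
        # Map unbounded CMA-ES variables to a bounded perturbation on the box.
        delta = np.tanh(z).reshape(h, w, 3) * eps
        adv = image.astype(np.float32).copy()
        adv[y0:y1, x0:x1] = np.clip(adv[y0:y1, x0:x1] + delta, 0, 255)
        return query_detector(adv)  # lower confidence = better candidate

    es = cma.CMAEvolutionStrategy(np.zeros(dim), sigma0, {"maxfevals": max_queries})
    while not es.stop():
        candidates = es.ask()
        es.tell(candidates, [loss(z) for z in candidates])

    best = np.tanh(es.result.xbest).reshape(h, w, 3) * eps
    adv = image.astype(np.float32).copy()
    adv[y0:y1, x0:x1] = np.clip(adv[y0:y1, x0:x1] + best, 0, 255)
    return adv.astype(np.uint8)
```

The box restriction keeps the optimization dimension proportional to the object's area rather than the whole image, which is one reason a query-based evolutionary search remains tractable.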
Recent research has shown that deep neural networks (DNNs) are vulnerable to adversarial examples, making them unsuitable for security-critical applications. The transferability of adversarial examples is crucial for attacking black-box models, which enables adversarial attacks in more practical scenarios. We propose a novel adversarial attack with high transferability. Unlike existing attacks that directly modify the input pixels, our attack is executed in the feature space. More specifically, we corrupt abstract features by maximizing the feature distance between the adversarial example and the clean image with a perceptual similarity network, inducing model misclassification. In addition, we apply a spectral transformation to the input, narrowing the search space in the frequency domain to enhance the transferability of adversarial examples; disrupting crucial features in a specific frequency component yields greater transferability. Extensive evaluations show that our approach is easily compatible with many existing transfer-attack frameworks and can significantly improve the baseline performance of black-box attacks. Moreover, we obtain a higher fooling rate even against defended models, achieving a maximum black-box fooling rate of 61.70% on the defense model. Our work indicates that existing pixel-space defense techniques can hardly guarantee robustness in the feature space, and that studying the feature space from a frequency perspective is promising for developing more robust models.
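A minimal sketch of the two ingredients the abstract names, under stated assumptions and not the paper's implementation: the feature distance is taken as LPIPS (one common perceptual similarity network), the spectral transformation is a simple random scaling of the input's 2-D Fourier spectrum, and the step size, iteration count, and noise scale are illustrative values.

```python
import torch
import lpips  # perceptual similarity network (LPIPS)


def feature_space_attack(x_clean, eps=8 / 255, steps=10, alpha=2 / 255, rho=0.5):
    """Maximize the perceptual feature distance to the clean image while
    applying a random spectral transformation each step (illustrative only)."""
    dist_fn = lpips.LPIPS(net="vgg").eval()
    delta = torch.zeros_like(x_clean, requires_grad=True)

    for _ in range(steps):
        # Spectral transformation: jitter the 2-D spectrum, return to pixels.
        spec = torch.fft.fft2(x_clean + delta)
        spec = spec * (1 + rho * torch.randn_like(spec.real))
        x_t = torch.fft.ifft2(spec).real.clamp(0, 1)

        # Maximize feature distance to the clean image (LPIPS expects [-1, 1]).
        loss = dist_fn(2 * x_t - 1, 2 * x_clean - 1).mean()
        loss.backward()

        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # gradient ascent on the distance
            delta.clamp_(-eps, eps)             # keep the L-infinity budget
        delta.grad = None

    return (x_clean + delta).clamp(0, 1).detach()
```

Because the objective depends only on a surrogate feature extractor rather than a specific classifier's logits, the resulting perturbation is intended to transfer to unseen black-box models.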