Recent research has revealed that the security of deep neural networks that directly process 3D point clouds for object classification can be threatened by adversarial samples. Although existing adversarial attack methods achieve high success rates, they do not restrict the point modifications enough to preserve the appearance of the point cloud. To overcome this shortcoming, two hard constraints are proposed: a bound on the number of modified points and a bound on the per-point perturbation norms. Because of the restrictive nature of the resulting problem, the search space contains many local maxima. The proposed method addresses this issue by using a large step size at the beginning of the algorithm to search the main surface of the point cloud quickly and effectively; the step size is then gradually decreased so that the attack converges to the desired output. To evaluate its performance, the proposed method is run on the ModelNet40 and ScanObjectNN datasets with state-of-the-art point cloud classification models, including PointNet, PointNet++, and DGCNN. The results show that it performs successful attacks and achieves state-of-the-art results with only a limited number of point modifications while preserving the appearance of the point cloud. Moreover, owing to the effective search algorithm, it can mount successful attacks in just a few steps. Additionally, the proposed step-size scheduling algorithm yields an improvement of up to 14.5% when adopted by other methods. The proposed method also remains effective against popular defense methods.
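The following is a minimal sketch of the two ideas named in the abstract: a decaying step-size schedule and hard constraints on the number of modified points and the per-point perturbation norm. The function names, the exponential decay form, and the default budgets are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def scheduled_step_size(step, total_steps, eta_start=0.05, eta_end=0.005):
    """Decay the step size from eta_start to eta_end (exponential form assumed).

    A large initial step explores the point cloud surface quickly; the
    decayed step lets the attack settle near a good local maximum.
    """
    decay = (eta_end / eta_start) ** (step / max(total_steps - 1, 1))
    return eta_start * decay

def project_constraints(delta, max_points=32, max_norm=0.05):
    """Enforce the two hard constraints on a perturbation `delta` (N x 3):
    keep only the `max_points` points with the largest perturbations and
    clip each kept perturbation to an L2 norm of `max_norm`."""
    norms = np.linalg.norm(delta, axis=1)
    keep = np.argsort(norms)[-max_points:]        # hard budget on modified points
    mask = np.zeros_like(norms, dtype=bool)
    mask[keep] = True
    delta[~mask] = 0.0                            # leave all other points untouched
    scale = np.minimum(1.0, max_norm / (norms[mask] + 1e-12))
    delta[mask] *= scale[:, None]                 # hard bound on per-point norm
    return delta
```

In a gradient-based attack loop, `scheduled_step_size` would scale each update of `delta`, and `project_constraints` would be applied after every step to keep the perturbation within the stated budgets.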
3D point clouds are increasingly used in a variety of applications, including safety-critical fields. It has recently been demonstrated that deep neural networks can successfully process 3D point clouds. However, these deep networks can be fooled by 3D adversarial attacks that intentionally perturb a point cloud's features. Such misclassifications may stem from the network's overreliance on features carrying unnecessary information in the training set. As such, identifying the features used by deep classifiers and removing the unnecessary information from the training data can improve the network's robustness against adversarial attacks. In this paper, the LPF-Defense framework is proposed to discard this unnecessary information by suppressing the high-frequency content of the training data during the training phase. Our analysis shows that adversarial perturbations reside in the high-frequency content of adversarial point clouds. Experiments show that the proposed defense achieves state-of-the-art performance against six adversarial attacks on the PointNet, PointNet++, and DGCNN models. The findings are supported by an extensive evaluation on synthetic (ModelNet40 and ShapeNet) and real (ScanObjectNN) datasets. In particular, classification accuracy improves by an average of 3.8% under the Drop100 attack and 4.26% under the Drop200 attack compared to state-of-the-art methods. The method also improves the models' accuracy on the original dataset compared to other available methods. (To facilitate research in this area, an open-source implementation of the method and data is released at https://github.com/kimianoorbakhsh/LPF-Defense.)
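As a rough illustration of suppressing high-frequency content in a point cloud, the sketch below uses k-nearest-neighbor Laplacian smoothing as a generic low-pass filter. This is a stand-in for the idea described in the abstract, not necessarily the frequency-domain filtering used by LPF-Defense; parameter names and defaults are assumptions.

```python
import numpy as np

def knn_indices(points, k):
    """Indices of the k nearest neighbors for each point in an (N, 3) array."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)                  # exclude each point from its own neighborhood
    return np.argsort(d2, axis=1)[:, :k]

def low_pass_filter(points, k=16, alpha=0.5, iters=3):
    """Suppress high-frequency detail by repeatedly moving each point toward
    the centroid of its k nearest neighbors (Laplacian smoothing).
    `alpha` controls filter strength; higher values remove more detail."""
    filtered = points.copy()
    for _ in range(iters):
        nbrs = filtered[knn_indices(filtered, k)]  # (N, k, 3) neighbor coordinates
        centroids = nbrs.mean(axis=1)
        filtered = (1 - alpha) * filtered + alpha * centroids
    return filtered

# Training-time usage (assumed): replace each training cloud with its
# low-pass-filtered version so the classifier relies on low-frequency shape cues.
# smoothed = low_pass_filter(cloud)   # cloud: (N, 3) numpy array
```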