Although 3D point cloud classification has recently been widely deployed in different application scenarios, it is still very vulnerable to adversarial attacks, which makes robust training of 3D models increasingly important. Our analysis of existing adversarial attacks shows that adversarial perturbations are concentrated mostly in the mid- and high-frequency components of the input data. Therefore, suppressing the high-frequency content of the input during the training phase improves the models' robustness against adversarial examples. Experiments show that the proposed defense method decreases the success rate of six attacks on the PointNet, PointNet++, and DGCNN models. In particular, it increases classification accuracy by an average of 3.8% on the drop100 attack and 4.26% on the drop200 attack compared to state-of-the-art methods. The proposed method also improves the models' accuracy on the original dataset compared to other available methods.
Keywords 3D deep learning • adversarial examples • frequency domain • defense

1 Introduction

Recently, 3D data has often been used as input to Deep Neural Networks (DNNs) in many scenarios, including healthcare, self-driving cars, drones, robotics, and more [1,2]. Compared to their 2D counterparts (which are projected forms of 3D data), 3D data capture more information about the environment and can therefore yield more accurate results, especially in safety-critical applications such as self-driving cars. There are different representations of 3D data, including voxels, meshes, and point clouds. Since point clouds can be acquired directly from scanners, they capture shape details precisely. DNNs such as PointNet [3], PointNet++ [4], and DGCNN [5] are designed to consume order-invariant point clouds directly. Despite the great success of 3D deep learning models, they are vulnerable to adversarial examples, i.e., inputs intentionally designed to mislead the models. Although adversarial examples and robustness against them have been analyzed in depth for 2D data [6,7,8,9,10,11,12], their investigation in 3D space has only just begun. In general, 2D and 3D adversarial attacks can be studied from different viewpoints.

To facilitate research in this area, an open-source implementation of the method and data is released at https://github.com/kimianoorbakhsh/LPF-Defense.

§ Indicates equal contribution.
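To make the frequency-domain idea from the abstract concrete, the sketch below illustrates one generic way to suppress high-frequency content of a point cloud before training: a graph-spectral low-pass filter that reconstructs the coordinates from only the lowest-frequency eigenvectors of a k-NN graph Laplacian. This is an illustrative assumption, not the paper's exact implementation; the function name, neighborhood size k, and keep_ratio parameter are hypothetical choices.

import numpy as np
from scipy.spatial import cKDTree


def low_pass_filter_point_cloud(points, k=10, keep_ratio=0.3):
    """Illustrative graph-spectral low-pass filter for an (N, 3) point cloud.

    Builds a k-NN graph over the points, forms the combinatorial graph
    Laplacian, and reconstructs the coordinates from only the
    lowest-frequency eigenvectors (the graph Fourier basis).
    """
    n = points.shape[0]
    tree = cKDTree(points)
    dists, idx = tree.query(points, k=k + 1)  # first neighbor is the point itself

    # Symmetric adjacency matrix with Gaussian edge weights.
    sigma = np.mean(dists[:, 1:]) + 1e-12
    W = np.zeros((n, n))
    for i in range(n):
        for j, d in zip(idx[i, 1:], dists[i, 1:]):
            w = np.exp(-(d ** 2) / (2.0 * sigma ** 2))
            W[i, j] = max(W[i, j], w)
            W[j, i] = W[i, j]

    # Combinatorial graph Laplacian L = D - W.
    L = np.diag(W.sum(axis=1)) - W

    # Eigenvectors of L, ordered by eigenvalue (graph frequency).
    _, eigvecs = np.linalg.eigh(L)
    n_keep = max(1, int(keep_ratio * n))
    U = eigvecs[:, :n_keep]  # low-frequency basis

    # Project the coordinates onto the low-frequency subspace and back.
    return U @ (U.T @ points)


# Example usage on a random cloud (stand-in for a ModelNet-style sample):
# filtered = low_pass_filter_point_cloud(np.random.rand(256, 3), k=10, keep_ratio=0.3)

Under this sketch, the filtered clouds would replace or augment the originals during training, so the classifier never relies on the high-frequency components that the analysis above associates with adversarial perturbations.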