In Autonomous Vehicles (AVs), one fundamental pillar is perception, which leverages sensors like cameras and LiDARs (Light Detection and Ranging) to understand the driving environment. Due to its direct impact on road safety, multiple prior efforts have been made to study the security of perception systems. In contrast to prior work that concentrates on camera-based perception, in this work we perform the first security study of LiDAR-based perception in AV settings, which is highly important but unexplored. We consider LiDAR spoofing attacks as the threat model and set the attack goal as spoofing obstacles close to the front of a victim AV. We find that blindly applying LiDAR spoofing is insufficient to achieve this goal due to the machine learning-based object detection process. We thus explore the possibility of strategically controlling the spoofed attack to fool the machine learning model. We formulate this task as an optimization problem and design modeling methods for the input perturbation function and the objective function. We also identify the inherent limitations of directly solving the problem using optimization and design an algorithm that combines optimization and global sampling, which improves the attack success rates to around 75%. As a case study to understand the attack impact at the AV driving decision level, we construct and evaluate two attack scenarios that may damage road safety and mobility. We also discuss defense directions at the AV system, sensor, and machine learning model levels.
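The idea of combining local optimization with global sampling can be illustrated with a minimal sketch. The code below is not the paper's algorithm: the `model.obstacle_confidence` call, the perturbation dimensionality, and all hyperparameters are assumptions made for illustration, and the local refinement step is a simple random search rather than the paper's actual formulation of the input perturbation and objective functions.

```python
import numpy as np

def attack_objective(perturbation, model, baseline_cloud):
    """Hypothetical objective: the model's confidence that the spoofed
    points form an obstacle (higher is better for the attacker)."""
    spoofed_cloud = baseline_cloud + perturbation
    return model.obstacle_confidence(spoofed_cloud)

def optimize_with_global_sampling(model, baseline_cloud, dim,
                                  n_restarts=20, n_steps=100, step=0.01):
    """Mix global random sampling (restarts) with local refinement,
    illustrating why sampling helps when pure optimization gets stuck."""
    rng = np.random.default_rng(0)
    best_x, best_val = None, -np.inf
    for _ in range(n_restarts):
        x = rng.uniform(-1.0, 1.0, size=dim)            # global sample
        for _ in range(n_steps):                         # local refinement
            candidate = x + step * rng.normal(size=dim)  # small random move
            if (attack_objective(candidate, model, baseline_cloud)
                    > attack_objective(x, model, baseline_cloud)):
                x = candidate
        val = attack_objective(x, model, baseline_cloud)
        if val > best_val:
            best_x, best_val = x, val
    return best_x, best_val
```

The restart loop plays the role of global sampling over the perturbation space, while the inner loop plays the role of the local optimizer; success rates improve because a single local search can stall in a region where the objective offers no useful signal.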
CCS CONCEPTS: • Security and privacy → Domain-specific security and privacy architectures; • Computer systems organization → Neural networks.
Intentional acoustic interference causes unusual errors in the mechanics of magnetic hard disk drives in desktop and laptop computers, leading to damage to integrity and availability in both hardware and software, such as file system corruption and operating system reboots. An adversary without any special-purpose equipment can co-opt built-in speakers or nearby emitters to cause persistent errors. Our work traces the deeper causality of these risks from the physics of materials to the I/O request stack in operating systems for audible and ultrasonic sound. Our experiments show that audible sound causes the head stack assembly to vibrate outside of operational bounds; ultrasonic sound causes false positives in the shock sensor, which is designed to prevent a head crash. The problem poses a challenge for legacy magnetic disks that remain stubbornly common in safety-critical applications such as medical devices and other highly utilized systems difficult to sunset. Thus, we created and modeled a new feedback controller that could be deployed as a firmware update to attenuate the intentional acoustic interference. Our sensor fusion method prevents unnecessary head parking by detecting ultrasonic triggering of the shock sensor.
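A minimal sketch of the sensor-fusion idea follows; it is not the paper's firmware design. The signal names and the threshold are assumptions chosen for illustration: the point is only that a shock-sensor trigger is acted on when corroborated by a second signal, so ultrasonic-induced false positives alone do not cause unnecessary head parking.

```python
def should_park_heads(shock_spike: bool, pes_deviation: float,
                      pes_threshold: float = 0.3) -> bool:
    """Fuse two signals before parking the head stack assembly.

    shock_spike   -- raw shock-sensor trigger (can be a false positive
                     under ultrasonic interference)
    pes_deviation -- magnitude of the drive's position error signal,
                     which reflects a real mechanical disturbance
    """
    # Park only when the shock sensor AND the position error signal agree;
    # an ultrasonic false positive by itself never parks the heads.
    return shock_spike and pes_deviation > pes_threshold
```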