2019
DOI: 10.48550/arxiv.1906.09765
Preprint

MobilBye: Attacking ADAS with Camera Spoofing

Abstract: Advanced driver assistance systems (ADASs) were developed to reduce the number of car accidents by issuing driver alerts or controlling the vehicle. In this paper, we tested the robustness of Mobileye, a popular external ADAS. We injected spoofed traffic signs into Mobileye to assess the influence of environmental changes (e.g., changes in color, shape, projection speed, diameter, and ambient light) on the outcome of an attack. To conduct this experiment in a realistic scenario, we used a drone to carry a portab…


Cited by 12 publications (13 citation statements)
References 4 publications
“…First, harsh weather conditions such as foggy and snowy weather could reduce the capabilities of the intelligent sensors. Besides that, physical noise or fake signal data, namely jamming attacks [13][14][15][16] and spoofing attacks [13], [17][18][19] (see details in §6.1.2), may also exist in the driving environment and could interfere with these sensors and harm their normal functionality.…”
Section: The Threat Model
confidence: 99%
“…This capability allows the attacker to outright replace the image with one that does not correspond to the world being observed, which trivially enables serious compromises. However, in many of the systems we observed, adversaries do not have that kind of access; they need to either find ways to project fake objects (as in [Nassi et al., 2019]) or be more subtle and create physical adversarial objects (as in [Sharif et al., 2016, Eykholt et al., 2018, Brown et al., 2017]). The relative increase in work factor for attackers is large, even though attacks are still possible.…”
Section: Robust Input Modalities
confidence: 99%
“…Attackers can find out-of-distribution inputs that do not fit the inputs the model designer imagined. For example, Nassi et al. [2019] demonstrate an attack that fools an autonomous vehicle into stopping or veering out of its lane by simply projecting false images of stop signs, pedestrians, and lane markers onto and around a road. Sufficiently realistic projections of light onto the road are enough to cause the vehicle to violate its security guarantees.…”
Section: Adaptive Defenses for Adaptive Adversaries
confidence: 99%
“…Recent academic work has produced many examples of disrupting camera-based intelligent systems using projections on surfaces in the environment [26], [31], [61], or the placement of printed patches [13]. Attacks inhibiting the use of camera footage are well known and repeatedly shown in different settings [35], [58].…”
Section: Introduction
confidence: 99%