2018
DOI: 10.1007/978-3-319-96145-3_1
Semantic Adversarial Deep Learning

Abstract: Fueled by massive amounts of data, models produced by machine-learning (ML) algorithms, especially deep neural networks, are being used in diverse domains where trustworthiness is a concern, including automotive systems, finance, health care, natural language processing, and malware detection. Of particular concern is the use of ML algorithms in cyber-physical systems (CPS), such as self-driving cars and aviation, where an adversary can cause serious consequences. However, existing approaches to genera…

Cited by 52 publications (58 citation statements). References 30 publications.
“…To deal with the lack of specification for perception components, VERIFAI analyzes them in the context of a closed-loop system using a system-level specification. Moreover, to scale to complex high-dimensional feature spaces, VERIFAI operates on an abstract feature space (or semantic feature space) [8] that describes semantic aspects of the environment being perceived, not the raw features such as pixels. • Learning: VERIFAI aims to not only analyze the behavior of ML components but also use formal methods for their (re-)design.…”
Section: Introduction (mentioning)
confidence: 99%
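The “abstract feature space” idea in the excerpt above can be made concrete in a few lines. Below is a minimal sketch, not the VERIFAI API: the feature ranges and the names render_scene and perceives_car are hypothetical stand-ins. The point is that the falsifier samples a low-dimensional vector of semantic scene parameters and only produces images by rendering, so any counterexample comes back described semantically rather than as a pixel-level perturbation.

    import random

    # Semantic features: (car_distance_m, lateral_offset_m, sun_angle_deg).
    # These ranges are illustrative assumptions, not values from the paper.
    FEATURE_RANGES = [(5.0, 60.0), (-2.0, 2.0), (0.0, 90.0)]

    def sample_semantic_point():
        """Draw one point from the abstract (semantic) feature space."""
        return [random.uniform(lo, hi) for lo, hi in FEATURE_RANGES]

    def render_scene(features):
        """Hypothetical renderer mapping semantic features to an image (stubbed)."""
        return {"pixels": None, "features": features}

    def perceives_car(image):
        """Hypothetical perception module under test (stubbed failure mode)."""
        distance, offset, sun = image["features"]
        return not (distance > 50 and sun > 80)

    def falsify(num_samples=1000):
        """Random search over the semantic space for perception failures."""
        for _ in range(num_samples):
            point = sample_semantic_point()
            if not perceives_car(render_scene(point)):
                return point  # counterexample, described semantically
        return None

    print(falsify())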
“…All of the aforementioned methods deal with the verification and testing of NNs at the component level; however, our work targets the NN testing problem at the system level. The line of research that is the closest in spirit to our work is [7,6,8]. The procedure described in [7,6,8] analyzes the performance of the perception system using static images to identify candidate counterexamples, which they then check using simulations of the closed-loop behaviors to determine whether the system exhibits unsafe behaviors.…”
Section: Related Work (mentioning)
confidence: 99%
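The two-stage procedure summarized above is simple to outline. The sketch below relies on assumed helpers (perception, simulate, and the 2.0 m safety threshold are all illustrative): stage one flags scenes on which the perception component alone errs on static images; stage two replays only those candidates in closed-loop simulation and keeps the ones that produce genuinely unsafe system behavior.

    def find_static_candidates(scenes, perception):
        """Stage 1: scenes where the perception component misclassifies a static image."""
        return [s for s in scenes if perception(s) != s["ground_truth"]]

    def is_unsafe_in_simulation(scene, simulate):
        """Stage 2: hypothetical closed-loop check of one candidate scene."""
        trace = simulate(scene)  # runs perception + controller + plant together
        return trace["min_distance"] < 2.0  # assumed system-level safety property

    def system_level_counterexamples(scenes, perception, simulate):
        """Keep only static candidates that also violate the system-level spec."""
        return [s for s in find_static_candidates(scenes, perception)
                if is_unsafe_in_simulation(s, simulate)]

Filtering this way matters because a component-level misclassification need not cause a system-level failure, which is exactly the distinction the excerpt draws.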
“…Studies show that adding carefully positioned stickers to a stop sign can cause a classification model to misidentify it as a speed-limit sign. However, existing investigations of adversarial examples still focus on classification errors for static images and are conducted in limited experimental environments [3], [5], [11]. Research that considers the learning model in a dynamic system setting, such as on autonomous vehicles in the real world, is sparse [12].…”
Section: B. Attacks on Deep Learning for Perception and Control (mentioning)
confidence: 99%
“…Despite the success of deep learning in enabling greater autonomy, a number of parallel efforts have also exposed a concerning fragility of these approaches to small adversarial perturbations of inputs, such as images [4], [5]. Moreover, Fig.…”
Section: Introduction (mentioning)
confidence: 99%
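The fragility mentioned in this last excerpt is the classic small-perturbation effect. Below is a minimal FGSM-style sketch (the fast gradient sign method; the toy linear model and epsilon are illustrative assumptions, not taken from the cited papers) showing how a bounded, gradient-sign step can flip a prediction even though each input coordinate moves only slightly.

    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=784)         # toy "model": score(x) = w . x
    x = rng.normal(size=784)         # a clean input
    y = 1.0 if w @ x > 0 else -1.0   # its current predicted label

    eps = 0.05                       # L-infinity perturbation budget
    grad = y * w                     # gradient of the margin y * (w . x) w.r.t. x
    x_adv = x - eps * np.sign(grad)  # step that maximally shrinks the margin

    print("clean margin:      ", y * (w @ x))
    print("adversarial margin:", y * (w @ x_adv))  # typically negative: label flips

Each coordinate of x moves by at most eps, yet the margin drops by eps times the L1 norm of the gradient, which grows with dimension; this linearity-in-high-dimensions argument is the standard explanation for why image models are so easy to perturb.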