2020
DOI: 10.48550/arxiv.2007.07205
Preprint
Security and Machine Learning in the Real World

Abstract: Machine learning (ML) models deployed in many safety- and business-critical systems are vulnerable to exploitation through adversarial examples. A large body of academic research has thoroughly explored the causes of these blind spots, developed sophisticated algorithms for finding them, and proposed a few promising defenses. The vast majority of these works, however, study standalone neural network models. In this work, we build on our experience evaluating the security of a machine learning software product dep…

Cited by 6 publications (9 citation statements)
References 29 publications
“…As such, the computed perturbations are confined to the digital domain and cannot be deployed against a physical target. Recently, there has been a surge of research interest in the performance of these attacks when deployed on real-world systems [11]. Along these lines, in our work, we develop perturbations that are designed to be deployable.…”
Section: Patch
confidence: 99%
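Digital-domain perturbations of the kind this snippet describes are typically computed with gradient methods; a minimal FGSM-style sketch on a toy linear classifier illustrates the idea (the model, weights, and data here are all illustrative, not taken from the cited work):

```python
import numpy as np

# Toy linear "model": logits = W @ x + b (illustrative weights only).
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 4))   # 2 classes, 4 input features
b = np.zeros(2)

def logits(x):
    return W @ x + b

def grad_loss_wrt_x(x, y):
    # Gradient of cross-entropy loss w.r.t. the input x for label y.
    z = logits(x)
    p = np.exp(z - z.max())
    p /= p.sum()              # softmax probabilities
    one_hot = np.eye(2)[y]
    return W.T @ (p - one_hot)

def fgsm(x, y, eps=0.1):
    # Fast Gradient Sign Method: one step in the sign of the input gradient.
    return x + eps * np.sign(grad_loss_wrt_x(x, y))

x = rng.normal(size=4)
y = int(np.argmax(logits(x)))   # treat the current prediction as the label
x_adv = fgsm(x, y)
print(np.abs(x_adv - x).max())  # perturbation bounded by eps in L-infinity norm
```

The resulting perturbation lives entirely in input-feature space, which is exactly why, as the citing authors note, it need not survive a physical deployment.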
“…These techniques encompass robust training procedures [23], perturbation detection and removal [14], and reconstruction through deep image priors [9]. To defend against Adversarial Scratches, following [11], we consider defenses that rely on input filtering, as these are more scalable than defenses that try to make the model itself more robust. In particular, we adopt:…”
Section: Defenses
confidence: 99%
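Input-filtering defenses of this kind preprocess the input to suppress sparse, high-frequency perturbations before the model sees it. A minimal sketch of one such filter, a median smoothing pass written in plain NumPy (the function and data are illustrative, not the cited defense's implementation):

```python
import numpy as np

def median_filter(img, k=3):
    # Slide a k x k window over a 2-D image and replace each pixel with
    # the window median; sparse outliers (scratch- or patch-style
    # perturbations) are discarded, smooth content survives.
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# A clean gradient image plus one "scratch" pixel of extreme value.
img = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))
img[4, 4] = 10.0                 # adversarial outlier
filtered = median_filter(img)
print(filtered[4, 4])            # outlier replaced by a local median
```

Such filtering sits in front of the model, which is why it scales more easily than retraining the model itself for robustness.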
“…In general, AML research has been criticized for the limited practical relevance of its threat models [28,31]. There is also limited knowledge about which threats are relevant in practice.…”
Section: Adversarial Machine Learning
confidence: 99%
“…Second, in deployed systems, an ML model typically interacts with other components, including other models. This interaction can be extremely complex, which may introduce additional challenges for adversaries [34]. For instance, in the Gboard app [48], as a user starts typing a search query, a baseline model determines possible search suggestions.…”
Section: Fallacies in Evaluation Setups
confidence: 99%
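A pipeline like the one described, where one model's output gates another model's input, can be sketched as a toy two-stage setup (all components here are illustrative, not Gboard's actual architecture):

```python
from typing import List

def baseline_suggester(prefix: str, vocabulary: List[str]) -> List[str]:
    # Stage 1: a cheap baseline model proposing candidate completions.
    return [w for w in vocabulary if w.startswith(prefix)]

def ranker(candidates: List[str]) -> List[str]:
    # Stage 2: a second model re-scores the survivors; here, shorter = better.
    return sorted(candidates, key=len)

def pipeline(prefix: str, vocabulary: List[str]) -> List[str]:
    # An adversary targeting the ranker must first get a candidate past
    # the baseline stage -- the interaction itself is an extra hurdle.
    candidates = baseline_suggester(prefix, vocabulary)
    return ranker(candidates)

vocab = ["search", "secure", "security", "settings"]
print(pipeline("sec", vocab))   # → ['secure', 'security']
```

The point the citing authors make is visible even in this toy: an attack evaluated against the ranker alone says little about the deployed system, because the baseline stage constrains what ever reaches the ranker.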
“…Description. An attack becomes ineffective if it requires the adversary to make a disproportionately large effort to overcome a small defense mechanism [34]. Proposed attacks need to be evaluated in this respect against state-of-the-art defenses.…”
Section: Fallacies in Evaluation Setups
confidence: 99%