2021
DOI: 10.48550/arxiv.2105.08619
Preprint

On the Robustness of Domain Constraints

Abstract: Machine learning is vulnerable to adversarial examples: inputs designed to cause models to perform poorly. However, it is unclear if adversarial examples represent realistic inputs in the modeled domains. Diverse domains such as networks and phishing have domain constraints: complex relationships between features that an adversary must satisfy for an attack to be realized (in addition to any adversary-specific goals). In this paper, we explore how domain constraints limit adversarial capabilities and how adversa…

Cited by 4 publications (5 citation statements)
References 43 publications

“…Most of the constrained attacks in feature space are an upgrade of traditional computer vision attacks like JSMA [16], C&W [17] or FGSM [18]. The majority of these attacks are evaluated in Network Intrusion Detection Systems (NIDS) [6], [10], [13], [19], [20] and a few other domains: Cyber-Physical Systems [7], Twitter bot detection and website fingerprinting [21], and credit scoring [3], [22], [23]. Each attack handles different types of constraints, resulting in adversarial examples that range from loosely realistic to fully realistic.…”
Section: A. Constrained Adversarial Attacks In Feature Space
mentioning confidence: 99%
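The statement above describes constrained feature-space attacks as upgrades of standard gradient attacks such as FGSM. As a rough illustration only, not the method of any cited work, the sketch below applies an FGSM-style step and then re-imposes two assumed, simple constraint types: valid feature ranges and integer-valued features. All names and constraint choices here are hypothetical.

```python
# Minimal sketch of a "constrained FGSM" step (illustrative assumption, not the
# attack from any of the cited papers): take a gradient-sign step in feature
# space, then project the result back into an assumed feasible region.
import torch

def constrained_fgsm(model, x, y, eps, loss_fn, lower, upper, int_mask):
    """One FGSM-style step followed by a simple domain-constraint projection.

    lower/upper: per-feature valid ranges (tensors or floats).
    int_mask:    boolean tensor marking integer-valued features.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Standard FGSM perturbation in feature space.
    x_adv = (x_adv + eps * x_adv.grad.sign()).detach()
    # Assumed domain constraints: clip to valid ranges, round integer features.
    x_adv = torch.clamp(x_adv, lower, upper)
    x_adv = torch.where(int_mask, x_adv.round(), x_adv)
    return x_adv
```

Realistic domain constraints (e.g., dependencies between NIDS flow features) would replace this simple projection with a domain-specific resolver, which is the gap the cited attacks aim to close.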
“…Sheatsley et al [6] integrate constraint resolution into the JSMA attack to limit feature values to the range dictated by their primary feature. Later on, for the same case study, Sheatsley et al [20] learned boolean constraints with the Valiant algorithm and introduced Constrained Saliency Projection (CSP), an improved iterative version of their previous attack. The bottleneck of this attack is the process of learning the constraints.…”
Section: A. Constrained Adversarial Attacks In Feature Space
mentioning confidence: 99%
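To make "constraint resolution inside an iterative, saliency-guided attack" concrete, here is a simplified skeleton, assumed for illustration and not the CSP algorithm of [20]: each step perturbs only the most salient feature, then calls a caller-supplied hook that rewrites dependent features so they stay consistent with their primary feature.

```python
# Rough skeleton of an iterative, saliency-guided attack with a constraint-
# resolution hook. The saliency rule and the resolver are simplified
# placeholders; every name here is an assumption for illustration.
import torch

def iterative_constrained_attack(model, x, y, loss_fn, steps, step_size,
                                 resolve_constraints):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Saliency-style choice: perturb only the feature with the largest
        # gradient magnitude for each sample.
        idx = grad.abs().argmax(dim=1, keepdim=True)
        delta = torch.zeros_like(x_adv).scatter_(
            1, idx, step_size * grad.gather(1, idx).sign())
        x_adv = (x_adv + delta).detach()
        # Constraint resolution: caller-supplied hook that rewrites dependent
        # features so the sample stays consistent with its primary feature.
        x_adv = resolve_constraints(x_adv).detach()
    return x_adv
```

In the cited works the resolver is derived from learned or hand-specified domain constraints; learning those constraints is the bottleneck the quote refers to.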
“…Initially introduced in vision [21], [22], evasion has now been demonstrated for a wide variety of domains including audio [30]- [32] and text [33]- [36]. Attacks can also be physically realized [31], [37], [38] and constrained such that samples generated for evasion preserve input semantics [32], [34], [39]- [42].…”
Section: Risks
mentioning confidence: 99%
“…Our observation that a proactive defender can observe each step of the pipeline is similar in idea to statistical testing at each of the layers of a computer vision model [45,55,59]. Sheatsley et al [66] make similar observations and propose a data-driven approach that learns constraints from data, enabling robustness. Concurrent work by Hussain et al [38] considers ASR systems.…”
Section: Related Work
mentioning confidence: 99%