2021
DOI: 10.48550/arxiv.2106.14815
Preprint
Feature Importance Guided Attack: A Model Agnostic Adversarial Attack

Abstract: Machine learning models are susceptible to adversarial attacks that dramatically reduce their performance. Reliable defenses against these attacks remain an unsolved challenge [11]. In this work we present a novel evasion attack: the 'Feature Importance Guided Attack' (FIGA), which generates adversarial evasion samples. FIGA is model agnostic: it assumes no prior knowledge of the defending model's learning algorithm, but does assume knowledge of the feature representation. FIGA leverages feature importance rankings; i…
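The abstract is truncated, but the idea it describes — perturbing only the features a ranking marks as most important, without querying the defending model's internals — can be sketched as follows. This is a hypothetical illustration, not the paper's actual algorithm: the function name `figa_like_perturb`, the sign-based update rule, and the use of target-class feature means are all assumptions for the sake of the example.

```python
import numpy as np

def figa_like_perturb(x, importance, target_mean, n_features=3, epsilon=0.1):
    """Nudge the n most important features of sample x toward the
    target-class feature means.

    Hypothetical sketch of a feature-importance-guided evasion step;
    the real FIGA update rule is not given in the truncated abstract.
    """
    x_adv = x.copy()
    # Indices of the n_features highest-ranked features.
    top = np.argsort(importance)[::-1][:n_features]
    for i in top:
        # Move each selected feature a small step toward the target class.
        x_adv[i] += epsilon * np.sign(target_mean[i] - x_adv[i])
    return x_adv
```

The importance ranking here could come from any surrogate (e.g., a tree ensemble's feature importances), which is what makes such an attack model agnostic with respect to the defender.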

Cited by 2 publications (4 citation statements)
References 34 publications
“…The reason is simple: ours is more likely to occur, because 'phishers' with complete knowledge of the entire ML-PWD are extremely unlikely. Furthermore, extensive adversarial ML literature [21] has ably demonstrated that white-box attacks can break most systems, including ML-PWD (e.g., [8,36,59,81]).…”
Section: Security Analysis (mentioning)
confidence: 99%
“…Unfortunately, most publicly available datasets do not allow similar procedures. A viable alternative is composing an ad-hoc dataset through public feeds as done, e.g., by [36] and [77] (the latter only for URL-based ML-PWD). All these papers, however, do not release the actual dataset, preventing reproducibility and hence introducing experimental bias.…”
Section: Related Work (mentioning)
confidence: 99%