Findings of the Association for Computational Linguistics: IJCNLP-AACL 2023
DOI: 10.18653/v1/2023.findings-ijcnlp.35
What Learned Representations and Influence Functions Can Tell Us About Adversarial Examples

Shakila Mahjabin Tonni,
Mark Dras

Abstract: Adversarial examples, deliberately crafted using small perturbations to fool deep neural networks, were first studied in image processing and more recently in NLP. While approaches to detecting adversarial examples in NLP have largely relied on search over input perturbations, image processing has seen a range of techniques that aim to characterise adversarial subspaces over the learned representations. In this paper, we adapt two such approaches to NLP, one based on nearest neighbors and influence functions an…
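The abstract's idea of characterising adversarial subspaces over learned representations can be illustrated with a minimal nearest-neighbor sketch. This is not the paper's method, only a toy illustration under stated assumptions: the synthetic representations, the `knn_score` helper, and the threshold-free comparison are all hypothetical stand-ins for a model's hidden states.

```python
import numpy as np

def knn_score(train_reps, x, k=5):
    """Mean Euclidean distance from x to its k nearest clean training
    representations. Higher scores suggest x lies off the manifold of
    clean examples, a common heuristic for flagging adversarial inputs."""
    dists = np.linalg.norm(train_reps - x, axis=1)
    return float(np.sort(dists)[:k].mean())

rng = np.random.default_rng(0)
# Stand-in for learned representations of clean training examples.
train = rng.normal(0.0, 1.0, size=(500, 32))
clean_x = rng.normal(0.0, 1.0, size=32)  # in-distribution test point
adv_x = clean_x + 6.0                    # large shift: off-manifold stand-in

# The off-manifold point scores much higher than the in-distribution one.
print(knn_score(train, clean_x) < knn_score(train, adv_x))
```

In practice one would compute such scores over actual hidden-layer activations and calibrate a detection threshold on held-out clean data; the comparison here only shows the direction of the signal.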

Cited by 0 publications
References 35 publications