Large Language Models in Cybersecurity 2024
DOI: 10.1007/978-3-031-54827-7_20

Adversarial Evasion on LLMs

Rachid Guerraoui, Rafael Pinot

Abstract: While Machine Learning (ML) applications have shown impressive achievements in tasks such as computer vision, NLP, and control problems, these achievements were obtained, first and foremost, in best-case settings. Unfortunately, settings in which ML applications fail unexpectedly abound, and malicious users or data contributors can deliberately trigger such failures. This problem became known as adversarial example robustness. While this field is developing rapidly, some fundamental results have b…
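
The abstract's topic, adversarial example robustness, concerns inputs perturbed at inference time so that an otherwise well-trained model fails. As a minimal, purely illustrative sketch (not the chapter's own method), the snippet below shows a gradient-sign evasion step in the style of FGSM; model, loss_fn, x, y, and epsilon are placeholder assumptions, and PyTorch is an assumed dependency.

    import torch

    def fgsm_evasion(model, loss_fn, x, y, epsilon=0.03):
        # Illustrative FGSM-style evasion sketch: nudge the input x in the
        # direction that increases the loss, bounded elementwise by epsilon.
        # All arguments are hypothetical placeholders, not the chapter's API.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + epsilon * x_adv.grad.sign()
            x_adv = x_adv.clamp(0.0, 1.0)  # keep the perturbed input in a valid range
        return x_adv.detach()

A misclassified x_adv obtained this way is exactly the kind of unexpected failure the abstract attributes to malicious users or data contributors.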

Cited by 0 publications
References: 8 publications