Proceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering 2024
DOI: 10.1145/3691620.3695271

Models Are Codes: Towards Measuring Malicious Code Poisoning Attacks on Pre-trained Model Hubs

Jian Zhao, Shenao Wang, Yanjie Zhao, et al.

Abstract: The proliferation of pre-trained models (PTMs) and datasets has led to the emergence of centralized model hubs like Hugging Face, which facilitate collaborative development and reuse. However, recent security reports have uncovered vulnerabilities and instances of malicious attacks within these platforms, highlighting growing security concerns. This paper presents the first systematic study of malicious code poisoning attacks on pre-trained model hubs, focusing on the Hugging Face platform. We conduct a compre…
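The premise behind the title "Models Are Codes" is that many PTM checkpoint formats are executable at load time. As a minimal sketch (not taken from the paper, and not its methodology), the example below assumes a pickle-based checkpoint, the serialization format historically used by many Hugging Face model files, and shows how deserialization can run attacker-chosen code via `__reduce__`:

```python
import pickle

# Minimal illustration of why "models are codes": Python's pickle format,
# used by many PTM checkpoints, lets a serialized object dictate a callable
# to invoke at load time via __reduce__.

class PoisonedArtifact:
    def __reduce__(self):
        # On unpickling, the loader calls print(...) with this argument.
        # A real attacker would substitute a harmful callable instead.
        return (print, ("payload executed during model deserialization",))

blob = pickle.dumps(PoisonedArtifact())  # what a poisoned checkpoint would contain
pickle.loads(blob)                       # "loading the model" runs the payload
```

The callable here is a benign `print` purely for demonstration; the point is that loading such a file is equivalent to executing code, which is the attack surface the paper measures on model hubs.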