2024
DOI: 10.6339/24-jds1156

Magnitude Pruning of Large Pretrained Transformer Models with a Mixture Gaussian Prior

Mingxuan Zhang,
Yan Sun,
Faming Liang

Abstract: Large pretrained transformer models have revolutionized modern AI applications with their state-of-the-art performance in natural language processing (NLP). However, their substantial parameter count poses challenges for real-world deployment. To address this, researchers often reduce model size by pruning parameters based on their magnitude or sensitivity. Previous research has demonstrated the limitations of magnitude pruning, especially in the context of transfer learning for modern NLP tasks. In this paper…
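The pruning-by-magnitude approach mentioned in the abstract amounts to zeroing out the weights with the smallest absolute values. The sketch below is a minimal, generic PyTorch illustration of that baseline idea only; it is not the mixture-Gaussian-prior method proposed in the paper, and the function name and `sparsity` parameter are placeholders chosen here for illustration.

```python
import torch

def magnitude_prune(weight: torch.Tensor, sparsity: float = 0.5) -> torch.Tensor:
    """Zero out the fraction `sparsity` of entries with the smallest magnitude.

    Generic per-tensor magnitude pruning (baseline illustration),
    not the paper's mixture-Gaussian-prior criterion.
    """
    flat = weight.abs().flatten()
    k = int(sparsity * flat.numel())
    if k == 0:
        return weight.clone()
    # Threshold = k-th smallest magnitude; entries at or below it are pruned.
    threshold = flat.kthvalue(k).values
    mask = weight.abs() > threshold
    return torch.where(mask, weight, torch.zeros_like(weight))

# Example: prune 50% of a random linear layer's weights.
layer = torch.nn.Linear(16, 16)
with torch.no_grad():
    layer.weight.copy_(magnitude_prune(layer.weight, sparsity=0.5))
```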

Cited by 0 publications
References 9 publications