2022
DOI: 10.48550/arxiv.2203.04570
Preprint

CP-ViT: Cascade Vision Transformer Pruning via Progressive Sparsity Prediction

Abstract: Vision transformer (ViT) has achieved competitive accuracy on a variety of computer vision applications, but its computational cost impedes deployment on resource-limited mobile devices. We explore the sparsity in ViT and observe that informative patches and heads are sufficient for accurate image recognition. In this paper, we propose a cascade pruning framework named CP-ViT that predicts sparsity in ViT models progressively and dynamically to reduce computational redundancy while minimizing the accuracy loss…
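The abstract describes the general idea of attention-guided token pruning applied progressively across ViT layers. The sketch below is an illustrative, minimal version of that idea, not the authors' CP-ViT implementation: tokens that receive the least attention from the [CLS] query are dropped between encoder blocks. The block structure, embedding size, keep-ratio schedule, and scoring rule are all assumptions for the example.

```python
# Minimal sketch (assumed, not the CP-ViT implementation): progressive patch
# pruning between ViT encoder blocks, keeping only the tokens that the [CLS]
# query attends to most strongly.
import torch
import torch.nn as nn


class ViTBlockWithAttn(nn.Module):
    """A standard pre-norm transformer block that also returns attention weights."""

    def __init__(self, dim: int = 192, heads: int = 3):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):
        h = self.norm1(x)
        # need_weights=True returns attention averaged over heads: (batch, query, key)
        attn_out, attn_w = self.attn(h, h, h, need_weights=True)
        x = x + attn_out
        x = x + self.mlp(self.norm2(x))
        return x, attn_w


def prune_patches(x, attn_w, keep_ratio):
    """Keep the [CLS] token plus the patches the [CLS] query attends to most."""
    cls_scores = attn_w[:, 0, 1:]                        # (batch, num_patches)
    num_keep = max(1, int(cls_scores.size(1) * keep_ratio))
    idx = cls_scores.topk(num_keep, dim=1).indices + 1   # +1 to skip the [CLS] slot
    idx, _ = idx.sort(dim=1)                             # preserve spatial order
    cls_tok = x[:, :1]
    patches = torch.gather(x, 1, idx.unsqueeze(-1).expand(-1, -1, x.size(-1)))
    return torch.cat([cls_tok, patches], dim=1)


if __name__ == "__main__":
    blocks = nn.ModuleList([ViTBlockWithAttn() for _ in range(4)])
    keep_ratios = [1.0, 0.9, 0.8, 0.7]    # assumed, progressively more aggressive
    x = torch.randn(2, 1 + 196, 192)      # [CLS] + 14x14 patches, embedding dim 192
    for block, r in zip(blocks, keep_ratios):
        x, attn_w = block(x)
        if r < 1.0:
            x = prune_patches(x, attn_w, r)
    print(x.shape)  # fewer tokens survive in later layers
```

Pruning progressively, rather than all at once, lets early layers see the full patch set while later layers operate on a shrinking set of informative tokens, which is where most of the computational savings come from.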

Cited by 1 publication
References 22 publications