2024
DOI: 10.7717/peerj-cs.2012

GAT TransPruning: progressive channel pruning strategy combining graph attention network and transformer

Yu-Chen Lin,
Chia-Hung Wang,
Yu-Cheng Lin

Abstract: Recently, large-scale artificial intelligence models with billions of parameters have achieved strong experimental results, but their practical deployment on edge computing platforms is often constrained by their resource requirements. These models need powerful computing platforms with high memory capacity to store and process their numerous parameters and activations, which makes it challenging to deploy such large-scale models directly. Therefore, model compression techniques are…
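The paper's GAT TransPruning strategy (combining a graph attention network with a transformer to guide pruning) is not reproduced here, but the general family of techniques it belongs to can be illustrated. Below is a minimal, hedged sketch of plain magnitude-based channel pruning — a common baseline, not the authors' method — where each output channel of a convolutional layer is scored by the L1 norm of its filter and the lowest-scoring channels are dropped. All names and the `keep_ratio` parameter are illustrative.

```python
import numpy as np

def prune_channels(weight, keep_ratio=0.5):
    """Magnitude-based channel pruning sketch (baseline, not GAT TransPruning).

    weight: conv weight of shape (out_channels, in_channels, kH, kW).
    Returns the pruned weight and the sorted indices of the kept channels.
    """
    out_channels = weight.shape[0]
    n_keep = max(1, int(out_channels * keep_ratio))
    # Score each output channel by the L1 norm of its filter.
    scores = np.abs(weight).reshape(out_channels, -1).sum(axis=1)
    # Keep the n_keep highest-scoring channels, preserving channel order.
    keep = np.sort(np.argsort(scores)[-n_keep:])
    return weight[keep], keep

# Toy 8-channel conv layer pruned to 4 channels.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3, 3))
pruned, kept = prune_channels(w, keep_ratio=0.5)
print(pruned.shape)  # (4, 3, 3, 3)
```

In a real pipeline, pruning an output channel also requires removing the matching input channel from the next layer and fine-tuning to recover accuracy; the paper's contribution is doing this progressively with learned (GAT/transformer) channel importance rather than a fixed magnitude score.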

Cited by 2 publications
References 36 publications