Computer Vision for Assistive Healthcare 2018
DOI: 10.1016/b978-0-12-813445-0.00001-0
Computer Vision for Sight

Cited by 5 publications (2 citation statements)
References 23 publications
“…(1) Criterion-based pruning resorts to preserving the most important weights/attentions by employing pre-defined criteria, e.g., the L1/L2 norm [23], or activation values [6]. (2) Training-based pruning retrains models with hand-crafted sparse regularizations [36] or resource constraints [31,32].…”
Section: Introduction
Mentioning confidence: 99%
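The statement above contrasts criterion-based pruning (keep weights that score highest under a fixed criterion such as the L1/L2 norm or activation values) with training-based pruning (retrain under sparse regularization or resource constraints). As an illustration only, and not code from any of the cited works, the sketch below applies the L1-norm criterion to a convolutional layer; the function name, keep ratio, and layer sizes are assumptions.

```python
# Minimal sketch of criterion-based pruning: keep the output filters of a
# conv layer with the largest L1 norm and zero out the rest.
# All names and values here are illustrative, not from the cited papers.
import torch
import torch.nn as nn

def l1_prune_conv(conv: nn.Conv2d, keep_ratio: float = 0.5) -> nn.Conv2d:
    """Zero out the output filters with the smallest L1 norm."""
    with torch.no_grad():
        # L1 norm of each output filter: shape (out_channels,)
        norms = conv.weight.abs().sum(dim=(1, 2, 3))
        n_keep = max(1, int(keep_ratio * norms.numel()))
        keep_idx = torch.topk(norms, n_keep).indices
        mask = torch.zeros_like(norms, dtype=torch.bool)
        mask[keep_idx] = True
        conv.weight[~mask] = 0.0
        if conv.bias is not None:
            conv.bias[~mask] = 0.0
    return conv

# Usage: prune half of the filters of a toy layer.
layer = nn.Conv2d(16, 32, kernel_size=3)
l1_prune_conv(layer, keep_ratio=0.5)
```

Training-based variants would instead add a sparsity penalty (e.g. an L1 term on the filter norms) to the loss and retrain, letting the optimizer drive unimportant weights toward zero.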
“…However, the proposed methods only address the issue of accelerating self-attention mechanisms, thus are not applicable to other types of layers. Recently, Zhu et al [36] introduce the method VTP with a sparsity regularization to identify and remove unimportant patches and heads from the vision transformers. However, VTP needs to try the thresholds manually for all layers.…”
Section: Introduction
Mentioning confidence: 99%
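The second statement notes that VTP [36] learns importance scores under a sparsity regularizer and then removes patches and heads by thresholds that must be tuned by hand for every layer. The snippet below is a hedged sketch of that thresholding step only; the names, scores, and threshold value are assumptions, not the authors' VTP implementation.

```python
# Sketch of manual, per-layer threshold pruning of attention heads.
# head_scores stands in for importance scores learned under a sparsity
# regularizer; the threshold is chosen by hand, as the quoted text describes.
import torch

def head_mask_from_threshold(head_scores: torch.Tensor, threshold: float) -> torch.Tensor:
    """Return a {0,1} mask; heads whose |score| falls below the threshold are pruned."""
    return (head_scores.abs() >= threshold).float()

# Example: one transformer layer with 12 heads and a hand-picked threshold.
scores = torch.rand(12)  # stand-in for learned importance scores
mask = head_mask_from_threshold(scores, threshold=0.3)
print(f"kept {int(mask.sum())} of {mask.numel()} heads")
```

The quoted criticism is precisely that this threshold has to be retried manually for each layer, rather than being set automatically.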