2022
DOI: 10.1016/j.future.2022.04.031

Inference-aware convolutional neural network pruning

Cited by 18 publications (30 citation statements). References 25 publications.
“…Nonetheless, the proposed model should function effortlessly on devices with limited hardware resources and on applications with strict latency requirements without any deterioration in recognition accuracy. To this end, model compression technology is essential [51,52].…”
Section: Discussion (mentioning)
confidence: 99%
“…To realize widespread use in medical practice and personal use, simpler models that can run even on small-scale electronic devices such as smartphones and wearable devices must be developed. For this problem, model compression techniques [31,32] will be an effective approach.…”
Section: Discussion (mentioning)
confidence: 99%
“…To realize a wide range of uses in medical practice, it is necessary to construct a simpler model that can even work on small-scale electronic devices such as smartphones and wearable terminals. To address this issue, model-compression techniques [34,35] would be an effective approach.…”
Section: Discussion (mentioning)
confidence: 99%
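
The citation statements above all invoke model compression (pruning in particular) as the enabling technique for running CNNs on resource-constrained devices. As a concrete illustration only, not the inference-aware method of the cited paper, the sketch below shows generic magnitude-based weight pruning using PyTorch's standard torch.nn.utils.prune utilities; the toy two-layer network and the 30% sparsity level are arbitrary placeholders.

```python
# Minimal sketch of magnitude-based (L1) weight pruning with PyTorch.
# This demonstrates the generic "model compression by pruning" idea,
# not the specific inference-aware pruning method of the paper.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy CNN standing in for a real model (placeholder architecture).
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
)

# Zero out the 30% of weights with the smallest L1 magnitude in each conv layer.
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # bake the pruning mask into the tensor

# Report the resulting per-layer sparsity.
for name, module in model.named_modules():
    if isinstance(module, nn.Conv2d):
        w = module.weight
        print(f"{name}: {100.0 * (w == 0).sum().item() / w.numel():.1f}% zeros")
```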