2021
DOI: 10.3390/electronics10172176
Super-Resolution Model Quantized in Multi-Precision

Abstract: Deep learning has achieved outstanding results on a wide range of machine learning tasks, driven by the rapid growth in hardware computing capacity. However, as models achieve higher performance, their size grows, training and inference take longer, memory and storage occupancy increases, computing efficiency drops, and energy consumption rises. Consequently, such models are difficult to run on edge devices such as micro and mobile devices. Model compression t…

Cited by 6 publications (1 citation statement)
References 24 publications
“…The representative quantization methods include mixed precision [29] and quantization-aware training (QAT) [30]. Mixed precision training can improve the performance by searching the optimized bits per layer.…”
Section: B. Lightweight Super-Resolution Methods for Mobile
confidence: 99%
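
To make the per-layer bit-width idea concrete, the following minimal sketch quantizes each layer's weights with symmetric uniform quantization at a configurable bit width and reports the resulting quantization error, illustrating why searching over bits per layer trades accuracy against model size. The layer names, shapes, and bit-width assignment below are illustrative assumptions, not the configuration used in the cited paper.

import numpy as np

def quantize_symmetric(w, bits):
    """Symmetric uniform quantization of a weight tensor to `bits` bits.

    Returns the de-quantized (fake-quantized) weights, i.e. the values the
    network would see in the forward pass during quantization-aware training.
    """
    qmax = 2 ** (bits - 1) - 1          # e.g. 127 for 8-bit, 7 for 4-bit
    scale = np.max(np.abs(w)) / qmax    # one scale per tensor (per layer)
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q * scale

# Hypothetical per-layer bit-width assignment, as a mixed-precision search
# might produce: sensitive layers keep more bits, robust layers use fewer.
rng = np.random.default_rng(0)
layers = {
    "head_conv": (rng.standard_normal((64, 3, 3, 3)), 8),
    "body_conv": (rng.standard_normal((64, 64, 3, 3)), 4),
    "tail_conv": (rng.standard_normal((3, 64, 3, 3)), 8),
}

for name, (w, bits) in layers.items():
    w_q = quantize_symmetric(w, bits)
    mse = float(np.mean((w - w_q) ** 2))
    print(f"{name}: {bits}-bit, quantization MSE = {mse:.6f}")

In quantization-aware training, a fake-quantization step of this kind is inserted into the forward pass so the network learns to compensate for rounding error, while a mixed-precision search chooses the bits argument separately for each layer.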