2022
DOI: 10.1109/tie.2021.3070517

Quantization-Aware Pruning Criterion for Industrial Applications

Cited by 14 publications (4 citation statements). References 13 publications.

“…These models aim to strike a balance between accuracy and resource efficiency, enabling effective deployment in industrial environments. Currently, several compression methods for large neural networks are being explored, including parameter compression [163], pruning [164], and distillation [165]. These techniques aim to reduce the model size and computational requirements without significantly sacrificing predictive performance.…”
Section: A Lightweight Model For Industrial Edge Intelligence
Mentioning confidence: 99%
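
To make the pruning method the statement mentions concrete, here is a minimal sketch using PyTorch's built-in magnitude-pruning utilities; the toy model and the 50% sparsity level are illustrative assumptions, not details taken from the cited works.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Illustrative toy model (not the model from any cited paper).
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Zero out the 50% of weights with the smallest L1 magnitude in each
# linear layer, then make the pruning permanent.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")

# The parameter count is unchanged, but half the weights are now zero,
# so the model can be stored and executed sparsely.
zeros = sum(int((m.weight == 0).sum()) for m in model.modules()
            if isinstance(m, nn.Linear))
print(f"zeroed weights: {zeros}")
```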
“…We process the trained model with optimization, quantization, and pruning to speed up its operation on intelligent modules [12]. After these optimizations, our algorithm improves in both recognition speed and accuracy.…”
Section: Remote Sensing Image Object Detection System
Mentioning confidence: 99%
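
The quantization step this statement refers to can be sketched with PyTorch's post-training dynamic quantization, which replaces trained float32 Linear weights with int8 at inference time; the model below is an illustrative stand-in, not the cited detection system.

```python
import torch
import torch.nn as nn

# Illustrative stand-in for an already-trained model.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
model.eval()

# Post-training dynamic quantization: float32 Linear weights -> int8,
# reducing model size and speeding up CPU inference.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 128)
print(quantized(x).shape)  # torch.Size([1, 10])
```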
“…For example, quantization [14], changing the model parameters from floating-point numbers to low-bit-width numbers; pruning [15], removing the least important parameters that do not affect the decision-making process of the model; knowledge distillation [16], transferring knowledge from a teacher model to a student model. These model compression approaches can be considered individually as well as jointly [17], [18] and have shown their validity in numerous use scenarios [19]–[21]. Besides that, novel hardware architectures have also been developed to bring intelligence to the edge.…”
Section: Introduction
Mentioning confidence: 99%
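
As a sketch of the teacher-to-student transfer described above, the snippet below implements the standard knowledge-distillation loss (softened teacher outputs combined with ground-truth cross-entropy), assuming PyTorch; the toy networks, temperature, and weighting are illustrative choices, not values from the cited work.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative teacher (larger) and student (smaller) networks.
teacher = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
student = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 10))

def distillation_loss(s_logits, t_logits, labels, T=4.0, alpha=0.7):
    # Soft targets: KL divergence between temperature-softened
    # student and teacher output distributions, scaled by T^2.
    soft = F.kl_div(
        F.log_softmax(s_logits / T, dim=1),
        F.softmax(t_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy against the true labels.
    hard = F.cross_entropy(s_logits, labels)
    return alpha * soft + (1 - alpha) * hard

x = torch.randn(8, 32)
y = torch.randint(0, 10, (8,))
with torch.no_grad():          # teacher is frozen during distillation
    t_out = teacher(x)
loss = distillation_loss(student(x), t_out, y)
loss.backward()                # gradients flow only into the student
```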