2022
DOI: 10.1109/twc.2022.3182216
Changeable Rate and Novel Quantization for CSI Feedback Based on Deep Learning

Cited by 22 publications (3 citation statements)
References 30 publications
“…[27] equips the µ-law quantizer into its system and improves the feedback performance through end-to-end training. Focusing on the non-differentiable problem brought by the quantization module, [28] designs a differentiable function to approximate the gradients.…”
Section: Related Work
confidence: 99%
“…The results of the retraining-decoder training mode always differ from those of an end-to-end fashion, with the latter generally performing better [15]. Hence, for the uniform quantization and µ-law quantization, we adopt an end-to-end fashion and set the quantizer gradient to a constant one so that backpropagation can pass through, similar to the approach used in [9], [15], [16]. The training strategies and random seed are the same for all these quantization methods, with 1000 epochs and a cosine-annealing learning rate.…”
Section: Quantization Module Evaluation
confidence: 99%
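The excerpt above contrasts uniform and µ-law quantization trained end to end with a straight-through (constant-one) gradient. A minimal sketch of µ-law companding followed by uniform quantization is shown below; this is an illustration under standard definitions, not code from the cited papers, and the function name and parameter defaults (`mu=255`, `bits=4`) are illustrative:

```python
import math

def mu_law_quantize(x, mu=255.0, bits=4):
    """Sketch of a mu-law quantizer: compress x in [-1, 1], uniformly
    quantize the compressed value to 2**bits levels, then expand back.

    In end-to-end training, the non-differentiable rounding step is
    typically given a constant gradient of one (straight-through),
    as the excerpt above describes.
    """
    # Compress: non-uniform mapping gives finer resolution near zero.
    c = math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)
    # Uniform quantization in the compressed domain.
    levels = 2 ** bits
    q = round((c + 1) / 2 * (levels - 1)) / (levels - 1) * 2 - 1
    # Expand: invert the compression.
    return math.copysign(((1 + mu) ** abs(q) - 1) / mu, x)
```

Because the companding curve is steep near zero, small inputs are reconstructed with much finer resolution than inputs near ±1, which is the usual motivation for µ-law over plain uniform quantization of CSI codewords.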
“…Residual learning [9], clustering schemes [15], and approximate gradient functions [16] are introduced to generate the bitstream and reduce quantization distortion. DL-based networks can generally be divided into CNN and attention frameworks.…”
Section: Introduction
confidence: 99%