2019
DOI: 10.1109/lwc.2018.2874264
Deep Learning-Based CSI Feedback Approach for Time-Varying Massive MIMO Channels

Abstract: Massive multiple-input multiple-output (MIMO) systems rely on channel state information (CSI) feedback to perform precoding and achieve performance gain in frequency division duplex (FDD) networks. However, the huge number of antennas poses a challenge to conventional CSI feedback reduction methods and leads to excessive feedback overhead. In this article, we develop a real-time CSI feedback architecture, called CsiNet-long short-term memory (LSTM), by extending a novel deep learning (DL)-based CSI sensing and…
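As a concrete illustration of the idea in the abstract, here is a minimal PyTorch sketch of a CsiNet-LSTM-style pipeline: a per-snapshot encoder compresses each CSI matrix into a low-dimensional codeword, and an LSTM-based decoder exploits temporal correlation across snapshots when reconstructing. All layer choices and dimensions (a single linear encoder, a 128-dimensional codeword, 32x32 complex CSI) are illustrative assumptions, not the authors' exact architecture.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compress one CSI snapshot into a short codeword for feedback."""
    def __init__(self, in_dim=2 * 32 * 32, code_dim=128):
        super().__init__()
        self.fc = nn.Linear(in_dim, code_dim)   # dimensionality reduction

    def forward(self, h):                        # h: (batch, in_dim)
        return self.fc(h)

class LSTMDecoder(nn.Module):
    """Reconstruct the CSI sequence, using an LSTM to exploit time correlation."""
    def __init__(self, code_dim=128, out_dim=2 * 32 * 32):
        super().__init__()
        self.lstm = nn.LSTM(code_dim, code_dim, batch_first=True)
        self.fc = nn.Linear(code_dim, out_dim)

    def forward(self, codes):                    # codes: (batch, T, code_dim)
        feats, _ = self.lstm(codes)              # temporal refinement
        return self.fc(feats)                    # (batch, T, out_dim)

enc, dec = Encoder(), LSTMDecoder()
h_seq = torch.randn(4, 10, 2 * 32 * 32)          # T=10 correlated CSI snapshots
codes = torch.stack([enc(h_seq[:, t]) for t in range(10)], dim=1)
h_hat = dec(codes)                               # reconstructed sequence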

Cited by 352 publications (258 citation statements)
References 12 publications
“…Let NMSE_Dt(k) represent the NMSE of the direct-transfer algorithm evaluated on the k-th target environment, which can be obtained by testing the network on the dataset D_Te(k) using Eqs.…

Algorithm excerpt:
Update the parameter Ω using the unbiased 1st- and 2nd-moment vectors: Ω ← Ω − η·μ̂₁/(√ν̂₁ + ε)
end
Testing stage
Initialize NMSE: NMSE_Nt ← 0
for k = 1, …, K_T do
    Generate the testing dataset D_Te(k) for T_T(k)
    Predict the downlink CSI based on D_Te(k) and Ω using Eq. (8)
    Calculate NMSE_Nt(k) using Eq. (14)
    NMSE_Nt ← NMSE_Nt + NMSE_Nt(k)/K_T
end”
Section: Direct Transfer Algorithm (mentioning)
confidence: 99%
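To make the quoted excerpt concrete, here is a small Python sketch of the two pieces it interleaves: the Adam-style parameter update and the testing-stage NMSE loop. The helpers predict and datasets are hypothetical placeholders standing in for the network inference of Eq. (8) and the generated test sets D_Te(k).

import numpy as np

def adam_step(omega, m_hat, v_hat, eta=1e-3, eps=1e-8):
    # Update the parameter Omega with bias-corrected moment estimates:
    # Omega <- Omega - eta * m_hat / (sqrt(v_hat) + eps)
    return omega - eta * m_hat / (np.sqrt(v_hat) + eps)

def nmse(h_true, h_pred):
    # Eq. (14)-style normalized mean-squared error
    return np.sum(np.abs(h_true - h_pred) ** 2) / np.sum(np.abs(h_true) ** 2)

def testing_stage(datasets, predict):
    nmse_avg = 0.0                        # Initialize: NMSE_Nt <- 0
    K_T = len(datasets)
    for h_true, x in datasets:            # for k = 1, ..., K_T do
        h_hat = predict(x)                # downlink CSI prediction, cf. Eq. (8)
        nmse_avg += nmse(h_true, h_hat) / K_T
    return nmse_avg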
“…Specifically, Ω_{S,k} is initialized as the current network parameter Ω and is then updated with G_Tr steps of gradient descent. Since an overly large gradient makes gradient descent unstable, we limit the values of the source-task-specific gradient ∇_{Ω_{S,k}} Loss_{D_Tr^Sup(k)}(Ω_{S,k}) to a certain range and obtain the truncated source-task-specific gradient υ_{S,k} as [42]

[υ_{S,k}]_p = min( Υ, [∇_{Ω_{S,k}} Loss_{D_Tr^Sup(k)}(Ω_{S,k})]_p ),  p = 1, …, len(υ_{S,k}),  (18)

where Υ is the upper threshold of the gradient. Generally, an overly large Υ may lead to large fluctuations of the loss function, while an overly small Υ distorts the direction of the update, resulting in premature convergence.…”
Section: B. Meta-Training Stage (mentioning)
confidence: 99%
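A one-line sketch of the truncation in Eq. (18), taken literally: each gradient component is capped at the upper threshold Υ. The quoted formula only bounds components from above; a symmetric clip (shown as a comment) is the more common safeguard, so treat the literal version as a rendering of the excerpt rather than a recommendation.

import numpy as np

def truncate_gradient(grad, upper):
    # Literal Eq. (18): [upsilon]_p = min(Upsilon, [grad]_p)
    return np.minimum(upper, grad)
    # Symmetric alternative: return np.clip(grad, -upper, upper)

g = np.array([0.3, 5.0, -7.0])
print(truncate_gradient(g, 1.0))   # -> [0.3, 1.0, -7.0]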
“…The feature encoder extracts key features from the CSI matrix to obtain a lower-dimensional representation, which is subsequently converted into a discrete-valued vector by scalar quantization. While previous works simply send the 32-bit scalar-quantized version of the feature vector as the CSI feedback [11], [12], [14], we have observed that the autoencoder does not produce uniformly distributed feature values, which can hence be compressed further. To further reduce the required feedback, we employ an entropy encoder; in particular, we use the context-adaptive binary arithmetic coding (CABAC) technique [26], which outputs a variable-length bit stream.…”
Section: DeepCMC (mentioning)
confidence: 80%
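The compression gain claimed here is easy to reproduce in miniature: scalar-quantize a non-uniformly distributed feature vector and compare its empirical entropy against the fixed-rate cost. The entropy is only a proxy for what an arithmetic coder such as CABAC approaches, and the 256-level quantizer and synthetic feature distribution are assumptions for illustration.

import numpy as np

def quantize(features, n_levels=256):
    # Uniform scalar quantization over the observed value range.
    lo, hi = features.min(), features.max()
    step = (hi - lo) / (n_levels - 1)
    return np.round((features - lo) / step).astype(int)

def entropy_bits(symbols):
    # Empirical entropy in bits/symbol: lower bound for a good entropy coder.
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

feats = np.random.randn(4096) ** 3   # heavy-tailed, non-uniform feature values
q = quantize(feats)
print(f"{entropy_bits(q):.2f} bits/symbol vs 8 bits fixed-rate")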
“…Several autoencoder-based CSI reduction techniques [11], [12], [14] focus on dimensionality reduction by directly applying the autoencoder architecture. These works rest on the assumption that reducing the dimension of the CSI matrix fed back to the BS reduces the feedback overhead.”
(mentioning)
confidence: 99%
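The underlying overhead arithmetic, sketched under the assumption of the 32x32 angular-delay CSI matrices common in this line of work:

# A 32x32 complex CSI matrix has 2*32*32 = 2048 real entries; compressing it
# to an M-dimensional codeword gives compression ratio M / 2048.
n_real = 2 * 32 * 32
for M in (512, 128, 32):
    print(f"M={M:4d}: compression ratio {M / n_real:.4f} (1/{n_real // M})")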