2022
DOI: 10.48550/arxiv.2204.13465
Preprint

Attention Based Neural Networks for Wireless Channel Estimation

Abstract: In this paper, we deploy the self-attention mechanism to achieve improved channel estimation for orthogonal frequency-division multiplexing waveforms in the downlink. Specifically, we propose a new hybrid encoder-decoder structure (called HA02) which, for the first time, exploits the attention mechanism to focus on the most important input information. In particular, we implement a transformer encoder block as the encoder to achieve sparsity in the input features and a residual neural network as the decoder …
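The abstract only sketches the architecture, but the described combination (a transformer encoder block applying self-attention to pilot-based features, followed by a residual decoder producing the full channel estimate) can be illustrated with a minimal PyTorch sketch. All dimensions, layer sizes and the name HybridEstimator below are illustrative assumptions, not the authors' HA02 implementation.

import torch
import torch.nn as nn

# Hedged sketch of a hybrid attention-encoder / residual-decoder channel estimator.
# Dimensions and layer choices are illustrative assumptions, not the HA02 design.
class HybridEstimator(nn.Module):
    def __init__(self, n_pilot=48, n_subcarriers=72, d_model=64, n_heads=4):
        super().__init__()
        # Encoder: one transformer encoder layer applies self-attention across pilot positions.
        self.embed = nn.Linear(2, d_model)  # real/imag parts of each least-squares pilot estimate
        self.encoder = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, dim_feedforward=128, batch_first=True)
        # Decoder: a small residual (skip-connected) network mapping the encoded pilot
        # features to the channel estimate over the full set of subcarriers.
        self.proj = nn.Linear(n_pilot * d_model, n_subcarriers * 2)
        self.res_fc = nn.Linear(n_subcarriers * 2, n_subcarriers * 2)

    def forward(self, pilot_ls):                  # pilot_ls: (batch, n_pilot, 2)
        x = self.embed(pilot_ls)                  # (batch, n_pilot, d_model)
        x = self.encoder(x)                       # self-attention over pilot positions
        x = self.proj(x.flatten(1))               # coarse full-band estimate
        return x + self.res_fc(torch.relu(x))     # residual refinement

# Example: estimate a 72-subcarrier channel from 48 noisy pilot estimates.
h_hat = HybridEstimator()(torch.randn(8, 48, 2))  # -> shape (8, 144), i.e. 72 complex taps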

Cited by 4 publications (10 citation statements) · References 14 publications
“…The attention mechanism has been employed to help DL-based communication systems, such as channel state information (CSI) compression [20], channel compression [21] and CE [22] [16]. It can enhance the estimation accuracy for channels with highly separate distributions in [16].…”
Section: B. Attention Mechanism (mentioning)
confidence: 99%
“…This will ensure that the subsequent neural network can focus on the critical features to yield the best channel estimate. Compared to HA02 [22], the neural architecture of the encoder is modified for the Channelformer to provide both reduced complexity and improved performance.…”
Section: B. Main Contributions and Outline (mentioning)
confidence: 99%
“…Compared with ReEsNet, Interpolation-ResNet with only 9,442 parameters (called InterpolateNet in this paper) [10] achieves slightly improved performance with 82% fewer parameters. However, the generalization capability of the InterpolateNet and ReEsNet trained in [10] is quite limited, which motivates us to investigate the attention mechanism and propose HA02 [22]. As the other neural network solutions are much less complex than ChannelNet, we only consider the state-of-the-art networks when presenting simulation results.…”
Section: ChannelNet, ReEsNet, TR and Interpolation-ResNet (mentioning)
confidence: 99%
“…A transformer network can be designed to process variable lengths of sequences as inputs and outputs, which is beneficial for designing scalable channel estimators. However, several transformer-based estimators, including those in Li and Peng [20] and Luan and Thompson [22], might not process variable sizes of resource grids through fully connected layers and/or upscaling modules, whose input and output dimensions have to be predetermined for a specific configuration. Moreover, as shown in the later part of this paper, the baseline transformer-based estimator tends to learn a specific scenario in the training dataset rather than general relationships, like other baseline DL-based estimators.…”
Section: Related Work (mentioning)
confidence: 99%
“…The self-attention mechanism and the transformer network [16] have achieved great success in various NLP applications, for example Devlin and others [17], owing to their powerful feature extraction, and also in various image processing applications [18]. Several works [19][20][21][22] presented channel estimation based on transformer networks or attention mechanisms. A transformer network can be designed to process variable lengths of sequences as inputs and outputs, which is beneficial for designing scalable channel estimators.…”
Section: Related Work (mentioning)
confidence: 99%
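To make the self-attention operation referenced in the passage above concrete, the following is a minimal scaled dot-product self-attention in plain PyTorch. The shapes and the idea of attending across pilot positions are illustrative assumptions for this page, not a specific estimator from the cited works.

import torch

def self_attention(x, w_q, w_k, w_v):
    # x: (batch, seq_len, d_model); w_*: (d_model, d_k) projection matrices.
    q, k, v = x @ w_q, x @ w_k, x @ w_v                     # queries, keys, values
    scores = q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5   # pairwise similarity, scaled
    weights = torch.softmax(scores, dim=-1)                 # attention weights sum to 1 per query
    return weights @ v                                      # weighted sum of values

d_model, d_k = 64, 64
x = torch.randn(8, 48, d_model)                 # e.g. features at 48 pilot positions
w_q, w_k, w_v = (torch.randn(d_model, d_k) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)          # -> (8, 48, 64)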