2019
DOI: 10.48550/arxiv.1911.13055
Preprint

Trainable Communication Systems: Concepts and Prototype

Abstract: We consider a trainable point-to-point communication system, where both transmitter and receiver are implemented as neural networks (NNs), and demonstrate that training on the bit-wise mutual information (BMI) allows seamless integration with practical bit-metric decoding (BMD) receivers, as well as joint optimization of constellation shaping and labeling. Moreover, we present a fully differentiable neural iterative demapping and decoding (IDD) structure which achieves significant gains on additive white Gaussian noise (AWGN) channels. […]
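
The training approach sketched in the abstract can be illustrated with a short example. The following is a minimal, hypothetical PyTorch sketch (not the authors' code; the network sizes, learning rate, SNR, and m = 4 bits per symbol are illustrative assumptions) of jointly training a trainable constellation and an NN demapper on the total binary cross-entropy, the BMI surrogate described in the paper, over an AWGN channel:

import math
import torch
import torch.nn as nn

m = 4                                # bits per channel use (16-point constellation)
snr_db = 10.0
sigma2 = 10.0 ** (-snr_db / 10.0)    # noise variance at unit signal power

# Trainable transmitter: one (I, Q) constellation point per m-bit label.
constellation = nn.Parameter(torch.randn(2 ** m, 2))

# NN demapper: maps a received (I, Q) sample to one logit (LLR) per bit.
demapper = nn.Sequential(nn.Linear(2, 128), nn.ReLU(), nn.Linear(128, m))

opt = torch.optim.Adam([constellation, *demapper.parameters()], lr=1e-3)
bce = nn.BCEWithLogitsLoss()         # mean binary cross-entropy, in nats

for step in range(2000):
    bits = torch.randint(0, 2, (1024, m), dtype=torch.float32)
    idx = (bits * (2.0 ** torch.arange(m))).sum(-1).long()  # bits -> label index
    x = constellation[idx]
    x = x / constellation.pow(2).sum(-1).mean().sqrt()      # unit average power
    y = x + math.sqrt(sigma2 / 2.0) * torch.randn_like(x)   # AWGN channel
    loss = bce(demapper(y), bits)                           # total binary CE
    opt.zero_grad()
    loss.backward()
    opt.step()

# BMI lower bound in bits per channel use: m minus the total binary CE in bits.
print("BMI estimate:", m - m * loss.item() / math.log(2.0))

Because the loss is the total binary cross-entropy, m minus the converged loss (converted to bits) estimates the achieved BMI, which is the rate a BMD receiver can exploit.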

Cited by 1 publication (5 citation statements) | References 21 publications
“…As previously mentioned, the idea of this autoencoder setup is to maximize the BMI at the receiver's output, which is shown in [12] to be closely related to minimizing the total binary CE, and leads to the following loss definition (1)…”
Section: A. Training Approaches (citation type: mentioning; confidence: 99%)
“…Contrary to [9], [10], the wireless channel can become an attractive subject of investigation once multipath and, thus, frequency-selectivity becomes part of the transmission. We utilize the orthogonal frequency division multiplex (OFDM)-autoencoder structure from [11] and optimize the autoencoder for bit-wise information transmission as introduced in [12]. Further, we utilize Wasserstein generative adversarial networks (WGANs) [13] for improved convergence and training stability.…”
Section: Introduction (citation type: mentioning; confidence: 99%)
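
As background on the last point (an addition here, not part of the quote), WGAN training [13] replaces the standard GAN objective with the Wasserstein-1 distance between the true channel output distribution p_y and the generator distribution p_G, estimated through a 1-Lipschitz critic f:

\min_{G} \; \max_{\|f\|_{L} \le 1} \; \mathbb{E}_{y \sim p_y}\big[f(y)\big] - \mathbb{E}_{\tilde{y} \sim p_G}\big[f(\tilde{y})\big]

The smoother critic loss is commonly credited with the improved convergence and training stability the citing authors mention.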