2021 · DOI: 10.1109/jsac.2021.3078489

Joint Source-Channel Coding Over Additive Noise Analog Channels Using Mixture of Variational Autoencoders

Cited by 36 publications (16 citation statements) · References 32 publications
“…In the second row we use a machine learning based Joint Source-Channel Coding scheme [43] followed by a classifier. Even though this system is better than the JPEG2000-based system, its performance is poor compared to the functional compression schemes showcased from the fourth row onwards.…”
Section: B. Simulation Results for Orthogonal AWGN Channels, 1) Varying…
confidence: 99%
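The excerpt compares a learned JSCC baseline [43] against classical and functional-compression pipelines without describing the baseline itself. As rough orientation, a learned JSCC system is typically an autoencoder whose latent code is transmitted directly over an AWGN channel and trained end to end; a minimal PyTorch sketch follows. The class name, layer sizes, and SNR handling are hypothetical, not taken from [43] or from the paper indexed above.

```python
# Minimal sketch of a learned joint source-channel coding (JSCC) autoencoder.
# Hypothetical illustration only; architecture and names are NOT from [43].
import torch
import torch.nn as nn

class JSCCAutoencoder(nn.Module):
    def __init__(self, source_dim=784, channel_dim=32):
        super().__init__()
        # The encoder maps the source directly to channel symbols: no separate
        # source code and channel code, hence "joint".
        self.encoder = nn.Sequential(
            nn.Linear(source_dim, 256), nn.ReLU(),
            nn.Linear(256, channel_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(channel_dim, 256), nn.ReLU(),
            nn.Linear(256, source_dim),
        )

    def forward(self, x, snr_db=10.0):
        z = self.encoder(x)
        # Normalize each codeword to unit average power so the SNR is defined.
        z = z / z.pow(2).mean(dim=1, keepdim=True).sqrt()
        # Additive white Gaussian noise channel at the requested SNR.
        noise_std = 10.0 ** (-snr_db / 20.0)
        return self.decoder(z + noise_std * torch.randn_like(z))

model = JSCCAutoencoder()
x = torch.rand(8, 784)                      # dummy batch of sources
loss = nn.functional.mse_loss(model(x), x)  # end-to-end reconstruction loss
loss.backward()
```

Training through the injected noise is what lets the encoder place codewords robustly for a given SNR; a downstream classifier, as in the excerpt, would consume the decoder output in place of the reconstruction loss.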
“…To summarize, the cross-entropy loss of our approach, and of the AE usually used in the literature, is well motivated as shown in (18), since its amortized version is equal to the negative MILBO. This is in contrast to the Variational AE (VAE) used in [24], which maximizes the ELBO on the evidence $p_\theta(y)$. Further, we notice that no explicit variational regularization term is present in the MILBO compared to the ELBO.…”
Section: B. Learning of a Semantic Communication System via Infomax Pr…
confidence: 99%
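For orientation, the two objectives the excerpt contrasts can be written in their standard forms. The paper's equation (18) and its exact MILBO definition are not reproduced in the excerpt, so the second bound below is the usual Barber-Agakov form, which is assumed to match; it makes the excerpt's point visible, since maximizing this mutual-information lower bound reduces to minimizing a cross entropy with no explicit KL regularization term, unlike the ELBO.

```latex
% ELBO on the evidence p_theta(y), as maximized by the VAE of [24]:
\log p_\theta(y) \;\ge\;
  \mathbb{E}_{q_\phi(z \mid y)}\!\left[\log p_\theta(y \mid z)\right]
  - \mathrm{KL}\!\left(q_\phi(z \mid y) \,\|\, p(z)\right)

% Barber--Agakov bound on mutual information (assumed MILBO form); the
% expectation term is the negative cross entropy, and no KL term appears:
I(s; y) \;\ge\; H(s) + \mathbb{E}_{p(s, y)}\!\left[\log q_\phi(s \mid y)\right]
```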
“…However, note that our choice (25) forces $y$ to contain much information about $s$, which is not necessarily true when maximizing $\beta I_\theta(x; y)$ while minimizing $I_\theta(s; x)$. Using the lower bound in (24), we can further avoid estimating the mutual information $I_\theta(s; y)$.…”
Section: Information Bottleneck View
confidence: 99%
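Reading the excerpt's symbols as $s$ (semantic source), $x$ (transmitted representation), and $y$ (received or relevance variable) is an assumption, since equations (24) and (25) are not reproduced in the excerpt; under that reading, the trade-off it refers to is the standard information-bottleneck Lagrangian:

```latex
% Assumed standard information-bottleneck form: compress x (discard
% information about s) while keeping x informative about y.
\min_{\theta}\; I_\theta(s; x) \;-\; \beta\, I_\theta(x; y)
```

The excerpt's point, restated, is that a small $I_\theta(s; x)$ does not by itself force $y$ to be informative about $s$, which is why the choice in (25) is presented as the stronger requirement.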
“…Other works have also used DNNs focusing on specific components such as decoder design [11,12,13,14,15,16,17] and constellation shaping for modulation [18]. Several other approaches to canonical problems have been proposed for feedback channels [19], quantized channel observations [20], joint source-channel coding [21], and the wiretap channel [22]. More recently, theoretical studies on end-to-end design have been given in [23].…”
Section: Introduction
confidence: 99%