2018
DOI: 10.1109/lcomm.2017.2787646

A Novel PAPR Reduction Scheme for OFDM System Based on Deep Learning

Cited by 207 publications (142 citation statements). References 10 publications.
“…Training of the end-to-end system introduced in the previous section is performed with respect to the loss function L defined in (6), as opposed to previous works, which train autoencoders with respect to the usual cross-entropy loss L defined in (5).…”
Section: Simulation Results (mentioning)
Confidence: 99%
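To make the contrast concrete, here is a minimal PyTorch sketch of a PAPR-aware training objective of the kind this snippet describes. It assumes, hypothetically, that the loss in (6) augments the cross-entropy in (5) with a PAPR penalty weighted by a coefficient lam; the function names and the weight are illustrative, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def papr(x_time):
    """Peak-to-average power ratio of a batch of time-domain OFDM frames."""
    power = x_time.abs() ** 2                       # instantaneous power
    return power.max(dim=-1).values / power.mean(dim=-1)

def autoencoder_loss(logits, labels, x_time, lam=0.01):
    """Cross-entropy plus a PAPR penalty; lam is a hypothetical weight."""
    ce = F.cross_entropy(logits, labels)            # usual loss, as in (5)
    papr_term = papr(x_time).mean()                 # PAPR-aware term
    return ce + lam * papr_term                     # combined loss, (6)-style
```

Training against such a combined loss trades a small detection penalty for lower PAPR, which is the distinction the citing work attributes to using (6) instead of (5).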
“…One naive approach to this issue is to regularize the cost function with a penalty parameter [18], [21], [22]. The corresponding unconstrained formulation can be expressed as…”
Section: B. Conventional Training Methods (mentioning)
Confidence: 99%
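As a rough illustration of the penalty formulation this snippet refers to, the sketch below folds a constraint g(θ) ≤ 0 into the objective with a fixed penalty weight ρ. The names and the value of ρ are assumptions; the exact form of the cited unconstrained formulation is truncated above and not reproduced here.

```python
import torch

rho = 10.0  # fixed penalty weight (hypothetical value)

def penalized_objective(cost, constraint):
    """Unconstrained surrogate for: minimize cost s.t. constraint <= 0.
    cost and constraint are scalar tensors computed from the network."""
    return cost + rho * torch.clamp(constraint, min=0.0)
```

Picking ρ by hand is the weakness of this naive approach: too small and the constraint is violated, too large and training is dominated by the penalty, which motivates the primal-dual scheme in the next snippet.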
“…The underlying idea is based on the primal-dual method [31], which has been utilized for tackling traditional constrained optimization problems, to train the encoding and decoding neural networks under multiple dimming constraints. We derive a single gradient-descent optimization algorithm for the DNN parameters and the dual variables, which is suitable for existing DL optimization libraries to tackle the constrained training problem in (18). At the t-th iteration of the proposed training technique, the DNN parameters Θ_D that minimize (20) are computed by steepest descent as…”
Section: Proposed Training Methods (mentioning)
Confidence: 99%
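The primal-dual update described above can be sketched generically: one gradient-descent step on the Lagrangian with respect to the DNN parameters, followed by projected gradient ascent on the dual variables. This is a sketch of the general primal-dual pattern, assuming PyTorch and hypothetical learning rates; it is not the authors' exact algorithm for (18) and (20).

```python
import torch

def primal_dual_step(model, lmbda, cost, constraints,
                     lr_theta=1e-3, lr_lmbda=1e-2):
    """One primal-dual iteration: descend the Lagrangian in the DNN
    parameters, then ascend in the dual variables (projected to >= 0).
    cost and constraints are tensors computed from the model's output."""
    lagrangian = cost + (lmbda * constraints).sum()
    model.zero_grad()
    lagrangian.backward()
    with torch.no_grad():
        for p in model.parameters():
            p -= lr_theta * p.grad                  # primal: steepest descent
        lmbda += lr_lmbda * constraints.detach()    # dual: gradient ascent
        lmbda.clamp_(min=0.0)                       # multipliers stay >= 0
    return lmbda
```

Because both updates are plain gradient steps on a single Lagrangian, the loop fits directly into standard DL optimization libraries, which is the practical point the snippet makes.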