2022
DOI: 10.1109/lpt.2022.3162157
Reducing Training Time of Deep Learning Based Digital Backpropagation by Stacking

Abstract: A method for reducing the training time of deep learning based digital backpropagation (DL-DBP) is presented. The method divides a link into smaller sections; one section is compensated by the DL-DBP algorithm, and the same trained model is then reapplied to the subsequent sections. We show in a 32 GBd 16QAM 2400 km 5-channel wavelength division multiplexing transmission link experiment that the proposed stacked DL-DBP provides a 0.41 dB gain with respect to the linear compensation scheme…
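The stacking idea in the abstract — train a compensation model on one link section, then reapply that same trained model to each subsequent section — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names are hypothetical, and the toy per-section model below is a simple nonlinear phase rotation standing in for the paper's learned DL-DBP network.

```python
import numpy as np

def stacked_dbp(signal, section_model, n_sections):
    """Reapply one per-section compensation model across the whole link.

    signal: received complex baseband samples
    section_model: a model trained on a single link section
    n_sections: number of identical sections the link is divided into
    """
    out = signal
    for _ in range(n_sections):
        out = section_model(out)  # same trained weights reused per section
    return out

# Toy stand-in for a trained section model: an intensity-dependent
# phase rotation (the kind of nonlinearity DBP aims to invert).
toy_model = lambda x: x * np.exp(-1j * 0.01 * np.abs(x) ** 2)

rx = np.ones(4, dtype=complex)       # dummy received samples
eq = stacked_dbp(rx, toy_model, 6)   # compensate six stacked sections
```

The point of the sketch is the training-time saving: only one section-sized model is trained, and inference simply composes it `n_sections` times instead of training a model for the full link.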

Cited by 4 publications (1 citation statement). References 14 publications.
“…An ANN's core benefit is the ability to construct a system model from the data at hand. It utilizes the backpropagation technique to extract texture characteristics in classifying images [4]. Meanwhile, the backpropagation approach is one of the supervised ANN methods examining the error for all neurons after processing a dataset.…”
Section: Introduction (confidence: 99%)