2022
DOI: 10.21203/rs.3.rs-1236203/v1
Preprint

Experimental Implementation of a Neural Network Optical Channel Equalizer in Restricted Hardware Using Pruning and Quantization

Abstract: The deployment of artificial neural network-based optical channel equalizers on edge-computing devices is critically important for the next generation of optical communication systems. However, this is a highly challenging problem, mainly due to the computational complexity of the artificial neural networks (NNs) required for the efficient equalization of nonlinear optical channels with large memory. To implement the NN-based optical channel equalizer in hardware, a substantial complexity reduction is needed,…

Cited by 4 publications (5 citation statements)
References 31 publications
“…As is shown, the sensitivity of the performance of these equalisers decreases as the size of the crowd increases. On the other hand, CC is defined as the number of complex-valued multiplications [3]:…”
Section: Results
confidence: 99%
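The complexity metric quoted above, CC defined as the number of complex-valued multiplications, can be illustrated with a minimal sketch for a fully connected equalizer. The layer sizes below are hypothetical, not taken from the cited work:

```python
def complex_multiplications(layer_sizes):
    """Complexity (CC) of a fully connected complex-valued MLP, counted
    as the number of complex multiplications between adjacent layers;
    biases and activations are ignored in this illustrative convention."""
    return sum(n_in * n_out for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# Example: an equalizer taking 10 complex taps through two hidden layers.
print(complex_multiplications([10, 32, 32, 2]))  # 10*32 + 32*32 + 32*2 = 1408
```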
“…Although the performance of some of these digital signal processing (DSP) techniques is impressive in restoring the original state of the received signal, these equalisers often require complex architectures with complicated elements such as long short-term memory nodes [1,2]. This increases the computational complexity (CC) of such solutions beyond what is acceptable for real-time implementation, especially when compared to alternatives such as digital backpropagation, and is at odds with the promise of low-footprint solutions [3]. Moreover, their architecture is fundamentally different from that of digital computers, making such hardware ill-suited to them.…”
Section: Introduction
confidence: 99%
“…Usually, we first apply knowledge distillation to improve the performance, then apply network pruning [132], [135] or quantization. Alternatively, network pruning is applied first, followed by quantization [88], [90], [95], [96], [100]. In summary, knowledge distillation should be applied first to ensure performance, then network pruning for model compression; quantization should be placed in the last step because training the quantized network is challenging.…”
Section: F Relationship Among Three Model Compression Methods
confidence: 99%
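The ordering recommended in this excerpt (distillation first, then pruning, then quantization) can be sketched for the last two stages. This is a toy NumPy illustration with invented parameters, not the compression pipeline of any cited work:

```python
import numpy as np

def prune(w, sparsity=0.5):
    """Magnitude pruning: zero out the smallest-magnitude weights."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < threshold, 0.0, w)

def quantize(w, bits=8):
    """Uniform symmetric quantization to 2**bits levels (applied last,
    per the ordering recommended in the excerpt)."""
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(32, 32))            # stand-in for trained weights
w_compressed = quantize(prune(w, sparsity=0.5), bits=8)
print(np.mean(w_compressed == 0))        # at least half the weights are zero
```

Quantizing after pruning preserves the zeros (zero always maps to a quantization level), which is one practical reason for this ordering.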
“…According to Refs. [39], [40], the compression can often be accomplished with little loss of accuracy and, in some situations, the accuracy may even rise [41]. Three methods of network compression are discussed below: pruning, weight clustering, and quantization.…”
Section: Xq Yi Yq
confidence: 99%
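Of the three compression methods this excerpt names, weight clustering is the least standard; a minimal 1-D k-means sketch (all parameters illustrative, not from the cited work) shows the idea of storing only a small codebook of distinct weight values:

```python
import numpy as np

def cluster_weights(w, n_clusters=16):
    """Weight clustering via simple 1-D k-means (Lloyd's algorithm):
    every weight is replaced by its nearest centroid, so only
    n_clusters distinct values (a small codebook) must be stored."""
    flat = w.ravel()
    centroids = np.linspace(flat.min(), flat.max(), n_clusters)
    for _ in range(20):                      # a few Lloyd iterations
        idx = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
        for k in range(n_clusters):
            members = flat[idx == k]
            if members.size:                 # leave empty clusters in place
                centroids[k] = members.mean()
    return centroids[idx].reshape(w.shape)

w = np.random.default_rng(1).normal(size=(8, 8))
w_clustered = cluster_weights(w)
print(len(np.unique(w_clustered)))           # at most 16 distinct values
```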
“…In this study, to validate the robustness of such an NN equalizer to quantization, the authors reduced the bit precision of the weights to as low as 2 bits, observing mostly only minor performance degradation. Finally, in [41], an MLP equalizer was used to mitigate the impairments in a 30 GBd 1000 km system. In this case, the PTQ strategy together with traditional uniform 8-bit quantization was demonstrated on low-performing hardware (Raspberry Pi and Jetson Nano).…”
Section: Weights
confidence: 99%
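The uniform 8-bit post-training quantization (PTQ) mentioned for the Raspberry Pi / Jetson Nano demonstration can be sketched as a symmetric quantize/dequantize round trip. This is a generic illustration of the technique, not the cited implementation:

```python
import numpy as np

def ptq_uniform(w, bits=8):
    """Uniform symmetric post-training quantization: map float weights
    to signed integers with a single per-tensor scale, as in a basic
    PTQ flow (no retraining involved)."""
    qmax = 2 ** (bits - 1) - 1                     # 127 for 8 bits
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

w = np.random.default_rng(2).normal(size=100).astype(np.float32)
q, scale = ptq_uniform(w)
max_err = np.abs(w - q.astype(np.float32) * scale).max()
print(max_err <= scale / 2 + 1e-6)               # error bounded by half a step
```

Because PTQ needs no retraining, only a scale per tensor, it is a natural fit for the low-end hardware named in the quote.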