2022
DOI: 10.3390/electronics11071138
Efficient FPGA Implementation of an ANN-Based Demapper Using Cross-Layer Analysis

Abstract: In the field of communication, autoencoder (AE) refers to a system that replaces parts of the traditional transmitter and receiver with artificial neural networks (ANNs). To meet the system performance requirements, the AE must adapt to changing wireless-channel conditions at runtime; thus, online fine-tuning in the form of ANN retraining is of great importance. Many algorithms at the ANN layer have been developed to improve the AE's performance at the communication layer. Yet, the link of the …
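To make the abstract's idea concrete, below is a minimal sketch of an ANN-based demapper with an online fine-tuning step. It is not the paper's architecture: the layer sizes, the 16-QAM constellation order, the SGD optimizer, and the pilot-based labeling are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's design): an MLP demapper maps a
# received I/Q sample to class scores over an assumed 16-QAM constellation, and
# one online retraining step adapts it to pilot symbols with known labels.
import torch
import torch.nn as nn

M = 16  # assumed constellation order (16-QAM)

demapper = nn.Sequential(
    nn.Linear(2, 32),   # input: real and imaginary part of the received symbol
    nn.ReLU(),
    nn.Linear(32, M),   # output: one score per constellation point
)

def fine_tune_step(rx_iq, tx_labels, lr=1e-3):
    """One online retraining step on a batch of pilot symbols."""
    # Plain SGD is stateless, so creating it per step is fine for a sketch.
    opt = torch.optim.SGD(demapper.parameters(), lr=lr)
    loss = nn.functional.cross_entropy(demapper(rx_iq), tx_labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Example usage with placeholder data standing in for a noisy channel:
rx = torch.randn(64, 2)              # received I/Q samples (placeholder)
labels = torch.randint(0, M, (64,))  # known transmitted pilot symbols
print(fine_tune_step(rx, labels))
```

In a real system the pilot labels would come from a known training sequence, and the retraining loop would run whenever the channel statistics drift.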

Cited by 5 publications (6 citation statements) · References 42 publications
“…These solutions are currently very popular for the implementation of various projects, a fact confirmed by an analysis of the literature [22][23][24][25][26][27][28].…”
supporting
confidence: 67%
“…FPGA training accelerators targeting communication systems were presented by Ney et al. in 2022 [6] and 2023 [10]. In [6], a trainable NN-based demapper was implemented using a cross-layer design methodology, and in [10] a novel unsupervised equalizer was presented, featuring a fully pipelined hardware architecture that balances the lifetime of feature maps. However, neither work is able to satisfy throughput requirements surpassing 10 Gbit·s⁻¹.…”
Section: A. NN Training on FPGA
mentioning
confidence: 99%
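The 10 Gbit·s⁻¹ figure quoted above translates into a demanding per-symbol inference rate. A back-of-the-envelope check, assuming 64-QAM (6 bit per symbol; the modulation order is an assumption for illustration, not taken from the cited works):

```python
# Required demapper inference rate for a given line rate: symbol rate equals
# bit rate divided by the bits carried per symbol.
bit_rate = 10e9          # target throughput in bit/s
bits_per_symbol = 6      # assumption: 64-QAM
symbol_rate = bit_rate / bits_per_symbol
print(f"{symbol_rate / 1e9:.2f} Gsymbol/s")  # ~1.67e9 demapper inferences per second
```

At well over a billion inferences per second, the NN forward pass must be deeply pipelined in hardware, which is why throughput is the limiting factor the citing authors highlight.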
“…In summary, some works focus on accelerating the throughput and processing time of NN training on FPGA [13]-[15]. Other works address low-power and low-energy processing for accelerating training at the edge [6], [10], [16], [17]. However, there is a clear lack of research focusing on high-throughput inference combined with resource-efficient training on FPGA, which is mandatory for our application to satisfy the high-throughput constraints while providing the required adaptability.…”
Section: A. NN Training on FPGA
mentioning
confidence: 99%