2018
DOI: 10.1007/s00521-018-3392-6
An efficient soft demapper for APSK signals using extreme learning machine

Cited by 9 publications (6 citation statements) | References 23 publications
“…The obtained in‐phase and quadrature components of the complex symbols at the output of the demodulator are then subjected to a maximum‐likelihood (ML) detector. After the detection process, the output of the detector is passed through a demapper, which may involve several independent or iterative decoding stages depending on the coding algorithms used in the mapper [5]. Finally, decoded symbols are passed through a multiplexer to convert from parallel to serial and generate the estimate of the transmitted message sequence.…”
Section: Brief Review of APSK
confidence: 99%
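The receiver chain described in this excerpt (demodulation, ML detection, then demapping) can be illustrated with a short sketch. The code below is a minimal, hypothetical example of minimum-distance ML detection for a 16-APSK constellation under AWGN; the ring sizes, radii, and function names are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def apsk_constellation(rings=(4, 12), radii=(1.0, 2.0)):
    """Build a hypothetical 16-APSK constellation: points on concentric rings."""
    points = []
    for n, r in zip(rings, radii):
        angles = 2 * np.pi * np.arange(n) / n
        points.extend(r * np.exp(1j * angles))
    return np.array(points)

def ml_detect(received, constellation):
    """ML detection under AWGN: choose the nearest constellation point
    (minimum Euclidean distance) for each received complex sample."""
    d = np.abs(received[:, None] - constellation[None, :])
    return d.argmin(axis=1)  # index of the most likely transmitted symbol

# Toy usage: detect a few noisy 16-APSK symbols
const = apsk_constellation()
tx_idx = np.array([0, 5, 10, 15])
rx = const[tx_idx] + 0.1 * (np.random.randn(4) + 1j * np.random.randn(4))
print(ml_detect(rx, const))  # should recover tx_idx with high probability
```

The detected symbol indices would then be passed to the demapper and parallel-to-serial conversion stages mentioned in the excerpt.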
“…Consequently, trying to keep the power consumption at a certain level unfortunately results in an increased BER. The other drawback is the implementation complexity of both a high‐order transmitter and a receiver, as the complexity gradually increases with the size of the constellation [5].…”
Section: Introduction
confidence: 99%
“…In gradient-based approaches, in order to minimize the error that occurs during training, the process of adjusting the weights and biases continues until the most appropriate parameters are obtained. In the ELM method, the input weights and biases are assigned randomly, and the output weights are calculated accordingly [10]. Equation 1 can be rewritten in the more compact form $H\beta = T$, where $H$ is the hidden-layer output matrix, $\beta$ the output weights, and $T$ the target matrix.…”
Section: B. Extreme Learning Machine
confidence: 99%
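Following the ELM formulation quoted above (random input weights and biases, output weights obtained from the compact form $H\beta = T$), a minimal training sketch might look like the following. The sigmoid activation, hidden-layer size, and toy data are assumptions for illustration only, not the cited paper's configuration.

```python
import numpy as np

def elm_train(X, T, n_hidden=50, seed=0):
    """Minimal ELM training sketch: random hidden layer, least-squares output layer."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Input weights W and biases b are assigned randomly and never updated
    W = rng.standard_normal((n_features, n_hidden))
    b = rng.standard_normal(n_hidden)
    # Hidden-layer output matrix H (sigmoid activation assumed here)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    # Output weights beta solve H @ beta = T in the least-squares sense
    beta = np.linalg.pinv(H) @ T
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy usage: fit a noisy sine and check that the training error stays small
X = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
T = np.sin(X) + 0.05 * np.random.default_rng(1).standard_normal(X.shape)
W, b, beta = elm_train(X, T)
print(np.mean((elm_predict(X, W, b, beta) - T) ** 2))
```

Because the hidden layer is fixed after the random assignment, the only learned parameters are the output weights, which is what makes ELM training a single least-squares solve rather than an iterative gradient descent.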
“…This situation allows the RNN to process data easily and quickly [32] and ensures good generalization performance for a single FFNN [33]. Compared to other known gradient-based learning algorithms, the RNN has several advantages, such as the potential to reach the minimum training error, the ability to operate with non-differentiable activation functions, and the use of a single hidden layer [34]. Moreover, the number of observations exceeds the number of neurons in the RNN hidden layer [35].…”
Section: Randomized Neural Network
confidence: 99%