2017 Information Theory and Applications Workshop (ITA) 2017
DOI: 10.1109/ita.2017.8023481
Sequential differential optimization of incremental redundancy transmission lengths: An example with tail-biting convolutional codes

Cited by 7 publications (5 citation statements)
References 11 publications
“…We apply the above analysis to the case of tail-biting convolutional codes over additive white Gaussian noise (AWGN) channels. As shown in [15] for binary inputs with a signal-to-noise ratio (SNR) of 2 dB, the Gaussian distribution closely approximates the ACK probability as follows:…”
Section: Case Study: Convolutional Codes
confidence: 97%
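The Gaussian approximation referred to above models the probability that the decoder returns an ACK by a given blocklength as a Gaussian CDF. A minimal sketch of that model, with illustrative mean and standard deviation (the actual fitted values for the tail-biting convolutional code at 2 dB SNR are in [15], not reproduced here):

```python
from math import erf, sqrt

def ack_probability(n, mu, sigma):
    """Gaussian-CDF model of the probability that the decoder has
    produced an ACK once n total symbols have been received.
    mu and sigma would be fit to the code's decoding behavior;
    the values used below are hypothetical."""
    return 0.5 * (1.0 + erf((n - mu) / (sigma * sqrt(2.0))))

# Hypothetical parameters for illustration only
mu, sigma = 100.0, 8.0
print(ack_probability(mu, mu, sigma))  # exactly 0.5 at the mean
```

The model is attractive because both the CDF and its density have closed forms, which is what makes the differential optimization in SDO tractable.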
“…Optimizing the HARQ approach requires determining the length of the initial transmission and of each subsequent transmission of incremental redundancy. Sequential differential optimization (SDO) [13]–[15] identifies a sequence of HARQ transmission lengths that optimizes throughput. For a specified maximum number of feedback transmissions and a maximum probability that the decoder fails to produce a positive acknowledgement (ACK) even when all possible incremental redundancy has been received, SDO finds the transmission lengths that minimize the average blocklength.…”
Section: Introduction
confidence: 99%
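One way to formalize the optimization described above: with cumulative decoding times N_1 < … < N_m and ACK probability F(n), the expected blocklength is E[N] = N_1 + Σ_i (N_{i+1} − N_i)(1 − F(N_i)); setting its partial derivatives to zero yields a recurrence that generates all later lengths from N_1. The sketch below implements that recurrence under the Gaussian model; it is an illustrative formulation with hypothetical parameters, not the authors' exact algorithm.

```python
from math import erf, exp, pi, sqrt

def gaussian_cdf(x, mu, sigma):
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def gaussian_pdf(x, mu, sigma):
    return exp(-((x - mu) ** 2) / (2.0 * sigma ** 2)) / (sigma * sqrt(2.0 * pi))

def sdo_lengths(n1, m, mu, sigma):
    """Given the first decoding time n1, return m cumulative decoding
    times satisfying the first-order conditions of the expected
    blocklength E[N] = N_1 + sum_i (N_{i+1} - N_i)(1 - F(N_i)),
    with F the (assumed Gaussian) ACK probability."""
    F = lambda x: gaussian_cdf(x, mu, sigma)
    f = lambda x: gaussian_pdf(x, mu, sigma)
    # dE/dN_1 = 0  =>  N_2 = N_1 + F(N_1) / f(N_1)
    lengths = [n1, n1 + F(n1) / f(n1)]
    for _ in range(m - 2):
        n_prev, n_cur = lengths[-2], lengths[-1]
        # dE/dN_i = 0  =>  N_{i+1} = N_i + (F(N_i) - F(N_{i-1})) / f(N_i)
        lengths.append(n_cur + (F(n_cur) - F(n_prev)) / f(n_cur))
    return lengths

# Hypothetical Gaussian model: mu = 100 symbols, sigma = 8
print(sdo_lengths(90.0, 5, 100.0, 8.0))
```

In this formulation N_1 itself would then be swept so that the final failure probability 1 − F(N_m) meets the specified constraint.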
“…Later, variations of SDO were developed to improve the accuracy of the Gaussian model, cf. [11], [12], and SDO was applied to systems that employ incremental redundancy and hybrid automatic repeat request [13] and to the binary erasure channel [14], [15]. However, in this paper, we will show that the Gaussian model is still imprecise for small values of n. Additionally, the existing SDO procedure does not consider the gap constraint on decoding times and is only suitable for VLSF codes with sparse decoding times.…”
Section: Introduction
confidence: 94%
“…Later, variations of SDO were developed to improve the accuracy of the Gaussian model [13], [14]. The SDO algorithm has been used to optimize systems that employ incremental redundancy and hybrid automatic repeat request (ARQ) [15], and to design codes for the binary erasure channel [16], [17]. However, in this paper, we show that the Gaussian model is still imprecise for small values of n. Additionally, the existing SDO procedure fails to consider the inherent gap constraint that two decoding times must be separated by at least one.…”
Section: Introduction
confidence: 99%
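The gap constraint mentioned in the last statement says that consecutive integer decoding times must differ by at least one symbol. Since the differential optimization produces real-valued lengths, a simple illustrative post-processing step (not the cited papers' procedure) is to round and enforce the minimum spacing:

```python
def enforce_integer_gap(lengths):
    """Round real-valued decoding times to integers while enforcing
    that consecutive decoding times differ by at least 1.
    Illustrative post-processing, not the procedure from the paper."""
    out = []
    prev = None
    for n in lengths:
        n_int = round(n) if prev is None else max(round(n), prev + 1)
        out.append(n_int)
        prev = n_int
    return out

print(enforce_integer_gap([90.0, 94.6, 98.3, 98.9, 99.2]))
# [90, 95, 98, 99, 100]
```

When the real-valued optimum clusters many decoding times within a single symbol, this kind of rounding visibly distorts the solution, which is one way to see why dense decoding times require a procedure that builds the gap constraint in from the start.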