2019
DOI: 10.1561/0100000092

Sparse Regression Codes

Abstract: Developing computationally-efficient codes that approach the Shannon-theoretic limits for communication and compression has long been one of the major goals of information and coding theory. There have been significant advances towards this goal in the last couple of decades, with the emergence of turbo codes, sparse-graph codes, and polar codes. These codes are designed primarily for discrete-alphabet channels and sources. For Gaussian channels and sources, where the alphabet is inherently continuous, Sparse …

Cited by 37 publications (24 citation statements)
References 126 publications (206 reference statements)

“…In the next step, the value of every variable node is translated into an index representation of length m. This action is emblematic of CCS [3], and it produces a one-sparse block. The index vectors from the L variable nodes are then aggregated into vector m, which possesses a structure reminiscent of sparse regression codes [11], [15]. The transmitted codeword is obtained by multiplying m by a judiciously designed matrix AD, i.e., ADm is transmitted over the channel.…”
Section: II-A Encoding Procedures
Mentioning, confidence: 99%
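
To make the quoted construction concrete, the following is a minimal Python sketch of a SPARC-like encoder: each section contributes a one-sparse index vector, the sections are concatenated, and the result is multiplied by a matrix. The dimensions, the Gaussian matrix, and the variable names are illustrative assumptions, not the cited papers' exact design of the matrix AD.

import numpy as np

rng = np.random.default_rng(0)

L, m, n = 4, 8, 16                                # sections, section length, channel uses (illustrative values)
A = rng.standard_normal((n, L * m)) / np.sqrt(n)  # stand-in for the judiciously designed matrix A_D

indices = rng.integers(0, m, size=L)              # one variable-node value per section, each in {0, ..., m-1}

message = np.zeros(L * m)                         # SPARC-like vector: one-sparse within every length-m section
for ell, idx in enumerate(indices):
    message[ell * m + idx] = 1.0

codeword = A @ message                            # A_D m, the vector sent over the channel
print(codeword.shape)                             # (16,)
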
“…This framework is rooted in a divide-and-conquer approach where support recovery is broken down into several sub-problems, each of a size amenable to the application of standard CS solvers, such as non-negative least squares (NNLS) or approximate message passing (AMP). This reduction is enabled through an architecture that contains a concatenated code structure reminiscent of sparse regression codes [11] and for-all sparse recovery [12]. Once fragments are obtained by the CS solvers, they are stitched together using the outer tree code, yielding the desired support of the sparse vector.…”
Section: Introduction
Mentioning, confidence: 99%
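
As a rough illustration of one such sub-problem, the sketch below recovers the support of a single one-sparse fragment with non-negative least squares (scipy.optimize.nnls), one of the standard CS solvers mentioned in the excerpt. The dimensions, noise level, and sensing matrix are illustrative assumptions; the cited papers' actual solvers and parameters may differ.

import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)

m, n = 32, 20                                     # section size and measurement count (illustrative)
A = rng.standard_normal((n, m)) / np.sqrt(n)

true_index = 7
x = np.zeros(m)
x[true_index] = 1.0                               # one-sparse sub-problem
y = A @ x + 0.01 * rng.standard_normal(n)         # noisy observation of this fragment

x_hat, _ = nnls(A, y)                             # non-negative least-squares estimate
recovered_index = int(np.argmax(x_hat))           # fragment handed to the outer tree code for stitching
print(recovered_index == true_index)
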
“…Every output symbol is turned into an index vector, which contains zeros everywhere except for a single location where it features a one. These index vectors are concatenated into a SPARC-like vector [23]- [25], which is subsequently multiplied by a sensing matrix. Such a construction is somewhat intricate, and it can hardly be explained in detail within a short article.…”
Section: System Model and Encoding Process
Mentioning, confidence: 99%
“…The conceptual starting point for our discussion is the CCS scheme of Amalladinne et al [6], [13]. This URA approach combines an LDPC outer code and a CS-style inner code reminiscent of a sparse regression code (SPARC) [14], [15]. More formally, an information message w is encoded using a non-binary LDPC code, yielding codeword…”
Section: Proposed Approach
Mentioning, confidence: 99%