2014
DOI: 10.1109/tit.2014.2314676

Lossy Compression via Sparse Linear Regression: Computationally Efficient Encoding and Decoding

Abstract: We propose computationally efficient encoders and decoders for lossy compression using a Sparse Regression Code. The codebook is defined by a design matrix, and codewords are structured linear combinations of columns of this matrix. The proposed encoding algorithm sequentially chooses columns of the design matrix to successively approximate the source sequence. It is shown to achieve the optimal distortion-rate function for i.i.d. Gaussian sources under the squared-error distortion criterion. For a given rate, t…
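The successive-approximation encoding described in the abstract can be sketched as follows. This is a hypothetical illustration, not the paper's exact construction: the function name, the on-the-fly section generation, and the single fixed coefficient `c` are all assumptions made for brevity (the paper chooses the per-section coefficients carefully rather than using one fixed value).

```python
import numpy as np

def sparc_encode(x, L, M, c, seed=0):
    """Sketch of a greedy successive-approximation SPARC encoder.

    Assumed setup: the design matrix has L sections of M i.i.d. Gaussian
    columns each, and the codeword is the sum of one scaled column per
    section. The encoder refines a running residual one section at a time.
    """
    rng = np.random.default_rng(seed)
    n = x.shape[0]
    residual = x.astype(float).copy()
    chosen = []
    for _ in range(L):
        # Generate one section of the design matrix on the fly, so only
        # M columns need to be held in memory at any one time.
        section = rng.standard_normal((n, M))
        # Greedy step: pick the column most aligned with the residual.
        j = int(np.argmax(section.T @ residual))
        chosen.append(j)
        # Subtract the scaled column to refine the approximation.
        residual -= c * section[:, j]
    return chosen, residual
```

The index list `chosen` (one of M choices per section) is the compressed representation; the final `residual` measures the remaining distortion.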

Cited by 29 publications (27 citation statements)
References 40 publications
“…The dictionary design is heavily inspired by SPARC, presented in [14]. The structure of the dictionary in SPARC can be used in the design of computationally efficient encoders [20]. An additional feature of SPARC is its low memory requirement, since only one section of the dictionary needs to be stored in memory at each iteration.…”
Section: B. Fixed-Rate Quantizers for PPC
confidence: 99%
“…We consider a dataset composed of M = 1000 binary sequences of length n = 512, with p = q = 0.5. As the fixed-length lossy compression algorithm, we use a binary-Hamming version of the successive refinement compression scheme [13].…”
Section: A. Binary Symmetric Sources and Hamming Distortion
confidence: 99%
“…It was shown in [11] that any ergodic source can be compressed to the Gaussian distortion-rate function…”
Section: Point-to-Point Source Coding
confidence: 99%
“…SPARCs for lossy compression were first considered in [9]. In [10], [11], we showed that SPARCs also attain the optimal rate-distortion function of Gaussian sources with feasible algorithms. Further, we showed in [12] that the source and channel coding modules can be combined to implement superposition and binning, which are key ingredients of several multi-terminal source and channel coding problems.…”
Section: Introduction
confidence: 99%