2008
DOI: 10.1109/icassp.2008.4518494

Explicit measurements with almost optimal thresholds for compressed sensing

Abstract: We consider the deterministic construction of a measurement matrix and a recovery method for signals that are block sparse. A signal of dimension N = nd, consisting of n blocks of size d, is called (s, d)-block sparse if only s blocks out of n are nonzero. We construct an explicit linear mapping Φ that maps the (s, d)-block sparse signal to a measurement vector of dimension M, where … − o(1). We show that if the (s, d)-block sparse signal is chosen uniformly at random, then the signal can almost surely …
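The block-sparsity model in the abstract is easy to make concrete. The following is a minimal sketch assuming only the definitions above: it builds an (s, d)-block sparse signal of dimension N = nd and measures it with a matrix Φ. The paper's explicit construction of Φ is not reproduced in this excerpt, so a random Gaussian matrix stands in, and all parameter values are illustrative.

```python
import numpy as np

# Illustrative (s, d)-block sparse signal: N = n*d entries arranged in n blocks
# of size d, with only s blocks nonzero. The paper's explicit Phi is not given
# in this excerpt, so a Gaussian matrix stands in; all parameters are made up.
rng = np.random.default_rng(0)
n, d, s = 20, 4, 3                    # N = 80, with 3 of 20 blocks active
N = n * d
x = np.zeros(N)
for b in rng.choice(n, size=s, replace=False):
    x[b * d:(b + 1) * d] = rng.standard_normal(d)

M = 30                                # illustrative measurement dimension
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x                           # measurement vector of dimension M
print(y.shape)                        # (30,)
```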

Cited by 32 publications (51 citation statements; 0 supporting, 51 mentioning, 0 contrasting). References 12 publications.
“…For instance, it is known that if x is exactly k-sparse, then based on Reed-Solomon codes [11] one can efficiently reconstruct x from O(k) noiseless measurements (e.g. [12]) via algorithms with decoding time complexity O(n log(n)), or, via codes such as in [13], [14], from O(k) noiseless measurements with decoding time complexity O(n). In the regime where k = Θ(n), [15] shows that O(k) = O(n) measurements suffice to reconstruct x. Noise/approximate sparsity: if the length-n source vector is the sum of an exactly k-sparse vector x and a "random" source noise vector z (and possibly y = A(x + z) also has a "random" noise vector e added to it), then as long as the noise variances are not "too much larger" than the signal power, the work of [16] demonstrates that O(k) measurements suffice (though the algorithms require time exponential in n).…”
Section: Number of Measurements (mentioning)
confidence: 99%
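The Reed-Solomon connection this statement mentions can be illustrated with the classical Prony / annihilating-filter idea: an exactly k-sparse x is recoverable from its first 2k DFT coefficients, which play the role of Reed-Solomon syndromes. A minimal sketch follows; the parameters and the use of an SVD null vector are my choices for illustration, not taken from [11]-[15].

```python
import numpy as np

# Prony / annihilating-filter recovery of an exactly k-sparse x from its first
# 2k DFT coefficients (the Reed-Solomon "syndrome" view). Illustrative only.
rng = np.random.default_rng(1)
n, k = 64, 3
support = np.sort(rng.choice(n, size=k, replace=False))
x = np.zeros(n)
x[support] = rng.standard_normal(k)

m = 2 * k
y = np.fft.fft(x)[:m]                 # y[j] = sum_i x[i] exp(-2j*pi*i*j/n)

# Annihilating filter h of length k+1: sum_i h[i] * y[j-i] = 0 for j = k..2k-1.
A = np.array([[y[j - i] for i in range(k + 1)] for j in range(k, m)])
h = np.linalg.svd(A)[2][-1].conj()    # null vector of A

# Roots of h[0] z^k + ... + h[k] are exp(-2j*pi*support[l]/n): read off support.
roots = np.roots(h)
idx = np.sort(np.round(-np.angle(roots) * n / (2 * np.pi)).astype(int) % n)

# Amplitudes from a least-squares fit on the recovered support.
V = np.exp(-2j * np.pi * np.outer(np.arange(m), idx) / n)
amps = np.linalg.lstsq(V, y, rcond=None)[0]
print(np.array_equal(idx, support), np.max(np.abs(amps - x[support])))
```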
“…(25) Proof 1 Follows by combining (20), (24), and recalling the definition of γ introduced below (15). (1) with the null space uniformly distributed in the Grassmannian.…”
Section: Case (mentioning)
confidence: 99%
“…A particular way of solving (1), which will be the subject of this paper, is ℓ1-norm relaxation [5]. (The interested reader can find more on different algorithms in the excellent references [1,4,20,18,19].) ℓ1-norm relaxation proposes solving the following problem: min ‖x‖₁ subject to Ax = y.…”
Section: Introduction (mentioning)
confidence: 99%
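The ℓ1 relaxation quoted above is basis pursuit, which becomes a linear program after splitting x into nonnegative positive and negative parts. A minimal sketch, assuming scipy is acceptable; the dimensions and the Gaussian A are illustrative stand-ins, not from the cited works.

```python
import numpy as np
from scipy.optimize import linprog

# Basis pursuit: min ||x||_1 subject to Ax = y, written as an LP by splitting
# x = x_pos - x_neg with x_pos, x_neg >= 0. Dimensions here are illustrative.
rng = np.random.default_rng(0)
n, m, k = 60, 30, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

c = np.ones(2 * n)                    # objective: sum(x_pos) + sum(x_neg)
A_eq = np.hstack([A, -A])             # A @ (x_pos - x_neg) = y
res = linprog(c, A_eq=A_eq, b_eq=y,
              bounds=[(0, None)] * (2 * n), method="highs")
x_hat = res.x[:n] - res.x[n:]
print("recovery error:", np.linalg.norm(x_hat - x_true))
```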
“…are legitimate choices for CS, carefully designed matrices can lead to additional benefits for sparse compression/recovery. A few examples are faster encoding and recovery algorithms for sparse matrices, in particular expander graphs [6,7,8,9], and higher recovery thresholds for measurement matrices based on Reed-Solomon codes [11,14], etc.…”
Section: Introduction (mentioning)
confidence: 99%
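The "faster encoding for sparse matrices" benefit mentioned in this statement can be seen in a toy example: if each column of Φ has only d nonzeros (the biadjacency pattern of a random bipartite, expander-style graph), computing y = Φx costs O(dN) rather than O(MN). A sketch with illustrative parameters, not the constructions of [6]-[9]:

```python
import numpy as np
from scipy.sparse import csc_matrix

# Toy "sparse measurement matrix" in the spirit of expander-graph constructions:
# each column has d ones at random rows, so encoding y = Phi @ x costs O(d*N)
# rather than O(M*N). Parameters are illustrative, not from the cited works.
rng = np.random.default_rng(0)
M, N, d = 40, 200, 8
rows = np.concatenate([rng.choice(M, size=d, replace=False) for _ in range(N)])
cols = np.repeat(np.arange(N), d)
Phi = csc_matrix((np.ones(d * N), (rows, cols)), shape=(M, N))

x = np.zeros(N)
x[rng.choice(N, size=5, replace=False)] = rng.standard_normal(5)
y = Phi @ x                           # fast sparse matrix-vector encoding
print(Phi.nnz, y.shape)               # 1600 nonzeros, (40,)
```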