2016
DOI: 10.1109/msp.2015.2486805
Compressive Covariance Sensing: Structure-based compressive sensing beyond sparsity

Cited by 129 publications (123 citation statements) · References 47 publications
“…First, in Section 4, we prove the first non-asymptotic analysis of a widely used estimation scheme that reads Θ(√d) entries per vector according to a sparse ruler [Mof68, RATL16]. Sparse ruler methods are important because, up to constant factors, they minimize ESC among all methods that read a fixed subset of entries from each vector sample.…”
Section: Our Contributions in Brief
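To make the sparse-ruler idea concrete, here is a minimal Python sketch of an illustrative O(√d) construction (a simple combination of a small prefix, multiples of ⌈√d⌉, and the endpoint — not one of the minimal rulers from the cited works) together with a check that its pairwise differences cover every distance 0..d−1:

```python
import math

def simple_ruler(d):
    """Return an index set R in {0,...,d-1} whose pairwise differences
    cover every distance 0..d-1, using roughly 2*sqrt(d) entries."""
    m = math.isqrt(d - 1) + 1
    small = set(range(m))                              # 0, 1, ..., m-1
    big = {k * m for k in range(d // m + 1) if k * m <= d - 1}
    return sorted(small | big | {d - 1})

def covers_all_distances(R, d):
    """Verify that every distance s in 0..d-1 appears as |a - b| for a, b in R."""
    diffs = {abs(a - b) for a in R for b in R}
    return all(s in diffs for s in range(d))

d = 16
R = simple_ruler(d)
print(R, covers_all_distances(R, d))   # [0, 1, 2, 3, 4, 8, 12, 15] True
```

With d = 16 this reads only 8 of the 16 entries per vector while still observing every lag, which is the entry-sample-complexity saving the excerpt describes.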
“…Work on Toeplitz covariance estimation with the goal of minimizing entry sample complexity has focused on sparse ruler based sampling. This approach has been known for decades [Mof68, PBNH85] and is widely applied in signal processing applications: we refer the reader to [RATL16] for an excellent overview. There has been quite a bit of interest in finding rulers of minimal size: asymptotically, Θ(√d) is optimal, but in many applications minimizing the leading constant is important [Lee56, Wic63, RSTL88].…”
Section: Related Work
“…Sparse rulers have received significant attention in covariance estimation applications [14], [23]–[26]. Given a sample x ∼ D with Toeplitz covariance matrix T, if we read the |R| entries of x corresponding to indices in a ruler R, we obtain an estimate of the covariance t_s at every distance s. So in principle, with enough samples, x^{(1)}, .…”
Section: Sparse Ruler Based Sampling
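The estimation scheme in the excerpt can be sketched in a few lines of NumPy. This is a minimal illustration, not the cited authors' code: it assumes zero-mean Gaussian samples with an AR(1)-style Toeplitz covariance t_s = 0.7^s (an invented example), reads only the ruler entries of each sample, and estimates t_s by averaging products of entry pairs at distance s:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 16, 20000
# Assumed example: Toeplitz covariance with entries t_s = 0.7**s.
t = 0.7 ** np.arange(d)
T = np.array([[t[abs(i - j)] for j in range(d)] for i in range(d)])
X = rng.multivariate_normal(np.zeros(d), T, size=n)   # n full vector samples

R = [0, 1, 2, 3, 4, 8, 12, 15]   # ruler for d = 16; differences cover 0..15
XR = X[:, R]                      # read only the |R| ruler entries per sample

# Estimate t_s by averaging x_i * x_j over all ruler pairs (i, j) at distance s.
t_hat = np.zeros(d)
for s in range(d):
    pairs = [(a, b) for a in range(len(R)) for b in range(len(R))
             if abs(R[a] - R[b]) == s]
    t_hat[s] = np.mean([XR[:, a] @ XR[:, b] / n for a, b in pairs])

print(np.max(np.abs(t_hat - t)))  # estimation error shrinks as n grows
```

Each sample contributes only 8 of its 16 entries, yet every lag s receives at least one product estimate, which is exactly the "estimate of the covariance t_s at every distance s" the excerpt refers to.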
“…Suppose the measurements collected in y are perturbed by zero-mean white Gaussian noise with unit variance; then the least-squares solution has the inverse error covariance, or Fisher information matrix, T(L) = E{(g − ĝ)(g − ĝ)^H} = Ψ^H(L)Ψ(L), which determines the quality of the estimators ĝ. Therefore, we can use scalar functions of T(L) as a figure of merit to pose the sparse tensor sampling problem (13), where by "optimize" we mean either "maximize" or "minimize" depending on the choice of the scalar function f{·}. Solving (13) is not trivial due to the cardinality constraints.…”
Section: Problem Modeling
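Cardinality-constrained design problems of this kind are commonly attacked with greedy heuristics. The sketch below is one such illustration, not the cited paper's method: it assumes a real-valued stand-in for Ψ(L), uses log det(Ψᵀ Ψ) as the scalar function f{·}, and greedily adds the candidate row that most increases the objective until a budget of k rows is reached:

```python
import numpy as np

rng = np.random.default_rng(1)
M, K, k = 30, 5, 8                      # candidate rows, parameters, budget (assumed sizes)
Psi = rng.standard_normal((M, K))       # stand-in for the model matrix Psi(L)

def logdet_fisher(rows):
    """log det of the (regularized) Fisher information for the chosen rows."""
    A = Psi[rows]
    return np.linalg.slogdet(A.T @ A + 1e-9 * np.eye(K))[1]

# Greedy selection: repeatedly add the row giving the largest log-det gain,
# a common surrogate for the cardinality-constrained problem in the excerpt.
chosen = []
for _ in range(k):
    best = max((i for i in range(M) if i not in chosen),
               key=lambda i: logdet_fisher(chosen + [i]))
    chosen.append(best)
print(sorted(chosen))
```

The log-det objective is a popular choice here because its gains diminish as rows are added, which is what makes greedy selection a reasonable surrogate for the exact combinatorial problem.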