2013
DOI: 10.1109/access.2013.2272664
Adaptive Estimation of Time-Varying Sparse Signals

Abstract: We consider the problem of adaptively designing compressive measurement matrices for estimating time-varying sparse signals. We formulate this problem as a partially observable Markov decision process. This formulation allows us to use Bellman's principle of optimality in the implementation of multistep lookahead designs of compressive measurements. We compare the performance of adaptive versus traditional non-adaptive designs and study the value of multi-step (non-myopic) versus one-step (myopic) lookahead ad…
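The abstract's one-step (myopic) lookahead idea can be illustrated with a toy linear-Gaussian model: at each step, choose the measurement vector that most reduces the posterior covariance trace. This is a minimal sketch under assumed names (`myopic_measurement`, `candidates`) and a Kalman-style update; it is not the paper's POMDP algorithm.

```python
import numpy as np

def myopic_measurement(P, candidates, noise_var=1.0):
    """One-step (myopic) lookahead for measurement design: among candidate
    vectors a, pick the one minimizing the posterior covariance trace after
    observing y = a^T x + v, v ~ N(0, noise_var).
    Illustrative toy, not the paper's POMDP solution."""
    best_a, best_trace = None, np.inf
    for a in candidates:
        gain = P @ a / (a @ P @ a + noise_var)   # Kalman gain for scalar measurement
        P_post = P - np.outer(gain, a @ P)       # rank-one covariance update
        if np.trace(P_post) < best_trace:
            best_a, best_trace = a, np.trace(P_post)
    return best_a, best_trace

rng = np.random.default_rng(1)
P = np.eye(4)                                    # prior covariance
candidates = [rng.standard_normal(4) for _ in range(10)]
a_star, t_star = myopic_measurement(P, candidates)
print(t_star < np.trace(P))  # → True: the chosen measurement reduces uncertainty
```

A multi-step (non-myopic) design would instead evaluate each candidate by also simulating future measurement choices, as in the paper's Bellman-based lookahead.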

Cited by 7 publications (9 citation statements)
References 45 publications
“…If the action space contains a large number of discrete actions, evaluating each might not be feasible. An option to speed up the search is then to prune this space prior to the evaluation, which has been performed in [18], [19].…”
Section: Search for the Optimal Action
confidence: 99%
“…The geometrical interpretation of the above problem is to select K columns of matrix H such that the norm of the projection of η onto the subspace spanned by the chosen columns is maximized. Adaptive algorithms such as those based on partially observable Markov decision processes have been proposed to find the optimal solution [19]. The computational complexity of adaptive algorithms is in general quite high, despite the reduction brought by approximation methods such as rollout.…”
Section: Linear Minimum Mean Squared Error Estimator
confidence: 99%
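The column-selection problem this excerpt describes (pick K columns of H maximizing the norm of the projection of η onto their span) can be sketched with a greedy heuristic. The function name is illustrative, and the cited work [19] uses a POMDP formulation rather than this greedy pass.

```python
import numpy as np

def greedy_column_selection(H, eta, K):
    """Greedily pick K columns of H so that the norm of the orthogonal
    projection of eta onto their span is (heuristically) maximized.
    A cheap surrogate for the combinatorial search in the excerpt."""
    chosen = []
    for _ in range(K):
        best_j, best_val = None, -np.inf
        for j in range(H.shape[1]):
            if j in chosen:
                continue
            S = H[:, chosen + [j]]
            # Least-squares fit gives the projection of eta onto span(S)
            coef, *_ = np.linalg.lstsq(S, eta, rcond=None)
            val = np.linalg.norm(S @ coef)
            if val > best_val:
                best_j, best_val = j, val
        chosen.append(best_j)
    return chosen

rng = np.random.default_rng(0)
H = rng.standard_normal((8, 5))
eta = rng.standard_normal(8)
cols = greedy_column_selection(H, eta, 3)
print(cols)
```

Greedy selection is not guaranteed optimal, which is why adaptive methods such as rollout are invoked in the excerpt despite their cost.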
“…Proof: Let Y* be the optimum of (19); then rank(Y*_11) = n and Y* satisfies both the equality and inequality constraints. Therefore, it belongs to the feasible set of (20). Furthermore, for every point Y in the feasible set of (20) with rank greater than n, we have…”
Section: Lemma V1 Consider the Following Rank Constrained Cardinality...
confidence: 99%
“…The problem of minimizing the number of nonzero elements of a vector/matrix subject to a set of constraints is inherently NP-hard and arises in many fields, such as Compressive Sensing (CS), where the inherent sparseness of signals is exploited to determine them from relatively few measurements [18]. Since the advent of Compressive Sensing, considerable work has been done on the design of compressive measurement matrices based on different criteria, such as sparse signal support detection and estimation [19], [20], sparse signal detection and classification [21], [22], etc.…”
confidence: 99%
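The l0 minimization this excerpt calls NP-hard is commonly attacked with greedy surrogates. Below is a minimal Orthogonal Matching Pursuit sketch; OMP is a standard CS algorithm, not taken from [18]-[22], and the test signal here is synthetic.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily build a k-sparse solution to
    y ≈ A x by repeatedly adding the column most correlated with the residual,
    then re-fitting by least squares on the current support."""
    residual = y.copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 40))
A /= np.linalg.norm(A, axis=0)        # unit-norm columns
x_true = np.zeros(40)
x_true[[3, 17]] = [1.5, -2.0]          # 2-sparse ground truth
y = A @ x_true                         # noiseless compressive measurements
x_hat = omp(A, y, 2)
print(np.nonzero(x_hat)[0])            # recovered support
```

With enough random measurements relative to the sparsity level, OMP typically recovers the true support exactly, which is the CS phenomenon the excerpt refers to.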