The multiple measurement vector (MMV) problem addresses the identification of unknown input vectors that share a common sparse support. Even though MMV problems have traditionally been addressed within the context of sensor array signal processing, the recent trend is to apply compressive sensing (CS) due to its capability to estimate sparse support even with an insufficient number of snapshots, in which case classical array signal processing fails. However, CS guarantees accurate recovery only in a probabilistic manner, and it often shows inferior performance in the regime where the traditional array signal processing approaches succeed. The apparent dichotomy between probabilistic CS and deterministic sensor array signal processing has not been fully understood. The main contribution of the present article is a unified approach that unveils a missing link between CS and array signal processing. The new algorithm, which we call compressive MUSIC, identifies part of the support using CS, after which the remaining support is estimated using a novel generalized MUSIC criterion. Using a large system MMV model, we show that compressive MUSIC requires a smaller number of sensor elements for accurate support recovery than the existing CS methods and that it can approach the optimal ℓ₀-bound with a finite number of snapshots.

The multiple measurement vector (MMV) problem is formulated as

  minimize ‖X‖₀ subject to AX = B,   (I.2)

where A ∈ ℝ^{m×n} is the sensing matrix, B ∈ ℝ^{m×r} collects the r measurement vectors (snapshots), and ‖X‖₀ := |supp X| denotes the number of nonzero rows of X. The MMV problem also has many important applications such as distributed compressive sensing [25], direction-of-arrival estimation in radar [26], magnetic resonance imaging with multiple coils [27], diffuse optical tomography using multiple illumination patterns [28, 29], etc. Currently, greedy algorithms such as S-OMP (simultaneous orthogonal matching pursuit) [21, 30], convex relaxation methods using mixed norms [31, 32], M-FOCUSS [22], M-SBL (multiple sparse Bayesian learning) [33], randomized algorithms such as REduce MMV and BOost (ReMBo) [23], and model-based compressive sensing using block sparsity [34, 35] have been applied to the MMV problem within the context of compressive sensing.

In MMV, thanks to the common sparse support, one expects the recoverable sparsity level to increase with the number of measurement vectors. More specifically, given a sensing matrix A, let spark(A) denote the smallest number of linearly dependent columns of A. Then, according to Chen and Huo [21] and Feng and Bresler [36], if X ∈ ℝ^{n×r} satisfies AX = B and

  ‖X‖₀ < (spark(A) + rank(B) − 1)/2 ≤ spark(A) − 1,   (I.3)

then X is the unique solution of (I.2). In (I.3), the last inequality comes from the observation that rank(B) ≤ ‖X*‖₀ := |supp X*|. Recently, Davies and Eldar showed that (I.3) is indeed a necessary condition for X to be the unique solution of AX = B [37]. Compared to the SMV case (rank(B) = 1), (I.3) tells us that the recoverable sparsity level increases with the number of measurement vectors. Furthermore, average-case analysis [38] and information-theoretic analysis [39] have indicated the performance improvement...
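The geometry behind the generalized MUSIC criterion can be made concrete with a small numerical sketch: once a CS step has returned k − r correct support indices, each remaining true support column must lie in the span of the signal subspace R(B) augmented with those partially identified columns, so the residual of its projection onto the orthogonal complement of that span vanishes. The sketch below (in NumPy, with a hypothetical function name and a noiseless model as simplifying assumptions) ranks candidate columns by exactly this residual and keeps the r smallest; it is an illustration of the idea, not the authors' reference implementation.

```python
import numpy as np

def remaining_support_gmusic(A, B, partial_support, k):
    """Illustrative generalized-MUSIC-style completion step (hypothetical helper).

    A: (m, n) sensing matrix; B: (m, r) snapshot matrix;
    partial_support: k - r indices assumed to be correctly found by a CS step;
    k: total sparsity level.
    """
    m, n = A.shape
    r = np.linalg.matrix_rank(B)

    # Orthonormal basis of the signal subspace R(B).
    U, _, _ = np.linalg.svd(B, full_matrices=False)
    Q = U[:, :r]

    # Augment the signal subspace with the columns indexed by the partial support.
    W = np.hstack([Q, A[:, partial_support]])
    Uw, sw, _ = np.linalg.svd(W, full_matrices=False)
    Uw = Uw[:, sw > 1e-10 * sw[0]]        # numerical basis of R([Q, A_partial])
    P_perp = np.eye(m) - Uw @ Uw.T        # projector onto its orthogonal complement

    # Generalized MUSIC criterion: a true support column a_j lies (ideally) in
    # R([Q, A_partial]), so its residual norm ||P_perp a_j|| vanishes.
    candidates = [j for j in range(n) if j not in set(partial_support)]
    scores = {j: np.linalg.norm(P_perp @ A[:, j]) for j in candidates}

    # Keep the r columns with the smallest residuals to complete the k-sparse support.
    remaining = sorted(candidates, key=lambda j: scores[j])[: k - len(partial_support)]
    return sorted(list(partial_support) + remaining)
```

In the noiseless case the residuals of the true support columns are exactly zero; with noise one would threshold the scores or, as here, simply keep the smallest ones.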
Dynamic tracking of sparse targets has been one of the important topics in array signal processing. Recently, compressed sensing (CS) approaches have been extensively investigated as a new tool for this problem, using partial support information obtained by exploiting temporal redundancy. However, most of these approaches are formulated under the single measurement vector compressed sensing (SMV-CS) framework, where performance guarantees hold only in a probabilistic sense. The main contribution of this paper is to allow deterministic tracking of time-varying supports with multiple measurement vectors (MMV) by exploiting multi-sensor diversity. In particular, we show that a novel compressive MUSIC (CS-MUSIC) algorithm with optimized partial support selection not only allows removal of the inaccurate portion of a previous support estimate but also enables addition of newly emerged parts of the unknown support. Numerical results confirm the theory.
The multiple measurement vector (MMV) problem addresses the identification of unknown input vectors that share a common sparse support. The MMV problem has traditionally been addressed either by sensor array signal processing or by compressive sensing. However, recent breakthroughs in this area such as compressive MUSIC (CS-MUSIC) and subspace-augmented MUSIC (SA-MUSIC) optimally combine compressive sensing (CS) and array signal processing such that k − r supports are first found by CS and the remaining r supports are determined by a generalized MUSIC criterion, where k and r denote the sparsity and the number of independent snapshots, respectively. Even though such a hybrid approach significantly outperforms the conventional algorithms, its performance depends heavily on the correct identification of the k − r partial support by the compressive sensing step, and errors at that step often deteriorate the overall performance. The main contribution of this paper is, therefore, to show that as long as k − r + 1 correct supports are included in any k-sparse CS solution, the optimal k − r partial support can be found using a subspace fitting criterion, significantly improving the overall performance of CS-MUSIC. Furthermore, unlike the single measurement vector CS counterpart that requires infinite SNR for perfect support recovery, we can derive an information-theoretic sufficient condition for perfect recovery using CS-MUSIC under a finite SNR scenario.
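One plausible way to instantiate the partial support selection described above is a subspace fitting search over (k − r)-subsets of the k-sparse CS estimate: each subset is completed to a full k-sparse support (for example by the generalized MUSIC sketch shown earlier) and scored by how well its columns fit the snapshot matrix in the least-squares sense. The sketch below is an illustration under these assumptions, with hypothetical names (select_partial_support, complete_fn); the paper's exact fitting criterion may differ.

```python
from itertools import combinations
import numpy as np

def select_partial_support(A, B, cs_support, k, r, complete_fn):
    """Illustrative subspace-fitting selection over (k - r)-subsets of a CS estimate.

    cs_support: k indices returned by a CS algorithm (some possibly wrong);
    complete_fn: routine completing a (k - r)-subset to a k-sparse support,
                 e.g. the remaining_support_gmusic sketch above.
    """
    best_support, best_cost = None, np.inf
    for subset in combinations(cs_support, k - r):
        full = complete_fn(A, B, list(subset), k)
        A_s = A[:, full]
        # Least-squares fitting residual of the snapshots onto span(A_s):
        # a small residual means the completed support explains B well.
        X_ls, *_ = np.linalg.lstsq(A_s, B, rcond=None)
        cost = np.linalg.norm(B - A_s @ X_ls)
        if cost < best_cost:
            best_cost, best_support = cost, full
    return best_support
```

The exhaustive enumeration over subsets is only for clarity; for realistic sparsity levels one would replace it with a greedy or sequential selection.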
In the multiple measurement vector (MMV) problem, where multiple signals share a common sparse support and are sampled by a common sensing matrix, we can expect joint sparsity to enable a further reduction in the number of required measurements. While a diversity gain from joint sparsity had been demonstrated earlier in the case of a convex relaxation method using an ℓ₁/ℓ₂ mixed norm penalty, only recently was it shown that a similar diversity gain can be achieved by greedy algorithms if we combine greedy steps with a MUSIC-like subspace criterion. However, the main limitation of these hybrid algorithms is that they often require a large number of snapshots or a high signal-to-noise ratio (SNR) for accurate subspace and partial support estimation. One of the main contributions of this work is to show that the noise robustness of these algorithms can be significantly improved by allowing sequential subspace estimation and support filtering, even when the number of snapshots is insufficient. Numerical simulations show that a novel sequential compressive MUSIC (sequential CS-MUSIC) algorithm that combines the sequential subspace estimation and support filtering steps significantly outperforms the existing greedy algorithms and is quite comparable with computationally expensive state-of-the-art algorithms.
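To illustrate what combining sequential subspace estimation with support filtering can look like in code, the following NumPy sketch greedily augments the snapshot-derived subspace one accepted column at a time and then filters the collected candidates by the energy of their least-squares coefficient rows. The function name, the number of extra candidates, and the energy-based filter are assumptions made for illustration; this is not the sequential CS-MUSIC algorithm itself.

```python
import numpy as np

def sequential_support_estimate(A, B, k, n_extra=2):
    """Illustrative sequential subspace estimation with support filtering (sketch only)."""
    m, n = A.shape
    r = np.linalg.matrix_rank(B)
    U, _, _ = np.linalg.svd(B, full_matrices=False)
    Q = U[:, :r]                                   # signal subspace estimate from the snapshots

    support = []
    for _ in range(k + n_extra):
        # Sequential subspace estimation: augment the subspace with accepted columns.
        W = np.hstack([Q, A[:, support]]) if support else Q
        Uw, sw, _ = np.linalg.svd(W, full_matrices=False)
        Uw = Uw[:, sw > 1e-10 * sw[0]]
        P_perp = np.eye(m) - Uw @ Uw.T             # projector onto the complement of the augmented span
        # Accept the remaining column best explained by the current subspace.
        scores = [(np.linalg.norm(P_perp @ A[:, j]), j)
                  for j in range(n) if j not in support]
        support.append(min(scores)[1])

    # Support filtering: keep the k indices whose coefficient rows carry the most energy.
    X_ls, *_ = np.linalg.lstsq(A[:, support], B, rcond=None)
    energy = np.linalg.norm(X_ls, axis=1)
    kept = np.array(support)[np.argsort(energy)[::-1][:k]]
    return sorted(kept.tolist())
```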