Abstract: Group-based sparsity models are instrumental in linear and non-linear regression problems. The main premise of these models is the recovery of "interpretable" signals through the identification of their constituent groups, which can also provably translate into substantial savings in the number of measurements for linear models in compressive sensing. In this paper, we establish a combinatorial framework for group-model selection problems and highlight the underlying tractability issues. In particular, we show that the group-model selection problem is equivalent to the well-known NP-hard weighted maximum coverage problem. Leveraging a graph-based understanding of group models, we describe group structures that enable correct model selection in polynomial time via dynamic programming. Furthermore, we show that popular group structures can be explained by linear inequalities involving totally unimodular matrices, which afford other polynomial-time algorithms based on relaxations. We also present a generalization of the group model that allows for within-group sparsity, which can be used to model hierarchical sparsity. Finally, we study the Pareto frontier between approximation error and sparsity budget of group-sparse approximations for two tractable models, including the tree-sparsity model, and illustrate selection and computation tradeoffs between our framework and the existing convex relaxations.
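As an illustration of the combinatorial problem the abstract reduces to, here is a minimal sketch (not from the paper) of the standard greedy approximation for weighted maximum coverage: given groups of indices, per-element weights, and a budget on the number of groups, repeatedly pick the group with the largest marginal covered weight. The function name and toy instance are illustrative; the greedy rule attains the classical (1 - 1/e) approximation guarantee for this NP-hard problem.

```python
def greedy_weighted_max_coverage(groups, weights, budget):
    """Greedy heuristic for weighted maximum coverage.

    groups  : list of sets of element indices
    weights : dict mapping element index -> nonnegative weight
    budget  : maximum number of groups to select
    Returns the list of chosen group indices and the total covered weight.
    """
    covered = set()
    chosen = []
    for _ in range(budget):
        best_gain, best_g = 0.0, None
        for gi, g in enumerate(groups):
            if gi in chosen:
                continue
            # marginal gain: weight of elements this group would newly cover
            gain = sum(weights[e] for e in g - covered)
            if gain > best_gain:
                best_gain, best_g = gain, gi
        if best_g is None:  # no group adds coverage; stop early
            break
        chosen.append(best_g)
        covered |= groups[best_g]
    return chosen, sum(weights[e] for e in covered)
```

For example, with overlapping groups {0,1}, {1,2}, {2,3}, {3,4}, unit weights, and a budget of 2, the greedy rule covers weight 4 by picking two disjoint groups.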
Generalized Orthogonal Matching Pursuit (gOMP) is a natural extension of the OMP algorithm in which, unlike OMP, N (≥ 1) atoms may be selected in each iteration. In this paper, we demonstrate that gOMP can successfully reconstruct a K-sparse signal from a compressed measurement y = Φx within K iterations if the sensing matrix Φ satisfies the restricted isometry property (RIP) of order NK with δ_NK < √N / (√K + 2√N). Our bound offers an improvement over the very recent result shown in [1]. Moreover, we present another bound for gOMP, of order NK + 1 with δ_(NK+1) < √N / (√K + √N), which for N = 1 exactly matches the near-optimal bound δ_(K+1) < 1/√(K+1) for OMP shown in [2].
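To make the mechanics concrete, here is a minimal sketch (not the authors' code) of the gOMP iteration described above: correlate the residual with the dictionary, add the N strongest atoms to the support, least-squares fit on the support, and update the residual. Setting N = 1 recovers plain OMP; all names are illustrative.

```python
import numpy as np

def gomp(Phi, y, K, N, max_iter=None, tol=1e-10):
    """Generalized OMP sketch: select N atoms per iteration (N = 1 is OMP)."""
    m, n = Phi.shape
    if max_iter is None:
        max_iter = K  # matches the abstract's guarantee: recovery within K iterations
    support = []
    r = y.copy()
    x_s = np.zeros(0)
    for _ in range(max_iter):
        # correlate the residual with all columns; pick the N largest magnitudes
        corr = np.abs(Phi.T @ r)
        corr[support] = 0.0          # do not re-select atoms already chosen
        new_atoms = np.argsort(corr)[-N:]
        support.extend(int(a) for a in new_atoms)
        # least-squares fit on the enlarged support, then update the residual
        x_s, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ x_s
        if np.linalg.norm(r) < tol:
            break
    x = np.zeros(n)
    x[support] = x_s
    return x
```

On a well-conditioned random Gaussian sensing matrix with normalized columns, this sketch typically recovers a K-sparse signal exactly, as the RIP-based guarantee suggests.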
In compressive sensing, one important parameter that characterizes the various greedy recovery algorithms is the iteration bound, which gives the maximum number of iterations within which the algorithm is guaranteed to converge. In this letter, we present a new iteration bound for CoSaMP via certain mathematical manipulations, including the formulation of appropriate sufficient conditions that ensure passage of a chosen support through the two selection stages of CoSaMP, "Augment" and "Update". Subsequently, we extend the treatment to the subspace pursuit (SP) algorithm. The proposed iteration bounds for both CoSaMP and SP are improvements over their existing counterparts, revealing that both algorithms converge in fewer iterations than suggested by results available in the literature.
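For reference, the two selection stages named above can be sketched as follows (a minimal illustration, not the letter's code): "Augment" merges the 2K strongest atoms of the signal proxy into the current support, and "Update" prunes the least-squares estimate back to the K largest entries. All names are illustrative.

```python
import numpy as np

def cosamp(Phi, y, K, max_iter=20, tol=1e-10):
    """CoSaMP sketch with the 'Augment' and 'Update' stages made explicit."""
    m, n = Phi.shape
    x = np.zeros(n)
    r = y.copy()
    for _ in range(max_iter):
        proxy = np.abs(Phi.T @ r)
        omega = np.argsort(proxy)[-2 * K:]        # Augment: 2K strongest proxy atoms
        T = np.union1d(omega, np.flatnonzero(x))  # merge with the current support
        b, *_ = np.linalg.lstsq(Phi[:, T], y, rcond=None)
        keep = np.argsort(np.abs(b))[-K:]         # Update: prune back to K atoms
        x = np.zeros(n)
        x[T[keep]] = b[keep]
        r = y - Phi @ x
        if np.linalg.norm(r) < tol:
            break
    return x
```

The iteration bounds discussed in the letter control how large `max_iter` must be before convergence is guaranteed; in practice, on well-conditioned problems, far fewer iterations suffice.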
A transmitter-receiver energy harvesting model is assumed, in which both the transmitter and the receiver are powered by random energy sources. Given a fixed number of bits, the problem is to find the optimal transmission power profile at the transmitter and the ON-OFF profile at the receiver that minimize the transmission time. The structure of the optimal offline strategy is derived, together with an optimal offline policy. An online policy with a competitive ratio strictly less than two is also derived.
Index Terms: Energy harvesting, offline algorithm, online algorithm, competitive ratio.