The Hard Thresholding Pursuit algorithm for sparse recovery is revisited using a new theoretical analysis. The main result states that all sparse vectors can be exactly recovered from compressive linear measurements in a number of iterations at most proportional to the sparsity level, as soon as the measurement matrix obeys a certain restricted isometry condition. The recovery is also robust to measurement error. The same conclusions are derived for a variation of Hard Thresholding Pursuit, called Graded Hard Thresholding Pursuit, which is a natural companion to Orthogonal Matching Pursuit and runs without a prior estimate of the sparsity level. In addition, for two extreme cases of the vector shape, it is shown that, with high probability on the draw of random measurements, a fixed sparse vector is robustly recovered in a number of iterations precisely equal to the sparsity level. These theoretical findings are also validated experimentally.

Key words and phrases: compressive sensing, uniform sparse recovery, nonuniform sparse recovery, random measurements, iterative algorithms, hard thresholding.
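For reference, the textbook HTP iteration alternates a gradient step on the least-squares objective, hard thresholding to the s largest entries, and a least-squares fit on the selected support. The following is a minimal numpy sketch of that standard iteration (illustrative only, not the paper's implementation; all names are ours):

```python
import numpy as np

def htp(A, y, s, max_iter=100):
    """Hard Thresholding Pursuit (textbook form, illustrative sketch)."""
    m, N = A.shape
    x = np.zeros(N)
    support = np.array([], dtype=int)
    for _ in range(max_iter):
        # gradient step on 0.5 * ||y - A x||^2
        u = x + A.T @ (y - A @ x)
        # candidate support: indices of the s largest entries in magnitude
        new_support = np.sort(np.argsort(np.abs(u))[-s:])
        if np.array_equal(new_support, support):
            break  # support stabilized: HTP has converged
        support = new_support
        # least-squares fit restricted to the candidate support
        x = np.zeros(N)
        x[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
    return x
```

In the noiseless setting with a Gaussian measurement matrix of sufficiently many rows, this iteration typically identifies the true support in a handful of steps, after which the restricted least-squares fit reproduces the vector exactly.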
In this paper, we examine approaches for reducing the complexity of evolving fuzzy systems (EFSs) by eliminating local redundancies during training, while the models evolve on on-line data streams. The complexity reduction steps must therefore support fast, incremental, single-pass processing. In EFSs, such reduction steps are important for several reasons: (1) originally distinct rules representing distinct local regions in the input/output data space may move together over time and become significantly overlapping as data samples fill up the gaps in between them; (2) two or more fuzzy sets in the fuzzy partitions may become redundant as a consequence of projecting high-dimensional clusters onto the single axes; (3) they can also be seen as a first step towards better readability and interpretability of fuzzy systems, as unnecessary information is discarded and the models are made more transparent. The first technique performs a new rule merging approach directly in the product cluster space, using a novel concept for calculating the similarity degree between an updated rule and the remaining ones. Inconsistent rules, elicited by comparing the similarity of two redundant rule antecedent parts with the similarity of their consequents, are handled specifically in the merging procedure. The second technique operates directly in the fuzzy partition space, where redundant fuzzy sets are merged based on their joint α-cut levels. Redundancy is measured by a novel kernel-based similarity measure. The complexity reduction approaches are evaluated on high-dimensional noisy real-world measurements and on an artificially generated data stream containing 1.2 million samples. Based on this empirical comparison, we show that the novel techniques are (1) fast enough to cope with on-line demands and (2) produce fuzzy systems with fewer structural components while achieving accuracies similar to EFSs that do not integrate any reduction steps.
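To make the fuzzy-partition case concrete, the sketch below computes a discretized overlap similarity between two Gaussian fuzzy sets and merges a redundant pair into one covering set. This is a common textbook heuristic, not the paper's kernel-based measure or its α-cut merging rule; all function names and the merge formula are our own illustrative choices:

```python
import numpy as np

def gaussian_mf(x, c, sigma):
    """Gaussian membership function with center c and width sigma."""
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

def similarity(c1, s1, c2, s2, grid):
    """Discretized Jaccard-style overlap of two Gaussian fuzzy sets.

    Ratio of the areas under min and max of the membership functions,
    evaluated on a fixed grid (1 for identical sets, near 0 for disjoint).
    """
    mu1 = gaussian_mf(grid, c1, s1)
    mu2 = gaussian_mf(grid, c2, s2)
    return np.minimum(mu1, mu2).sum() / np.maximum(mu1, mu2).sum()

def merge(c1, s1, c2, s2):
    """Merge two redundant sets into one Gaussian covering both (heuristic)."""
    c = 0.5 * (c1 + c2)
    sigma = 0.5 * (s1 + s2) + 0.5 * abs(c1 - c2)
    return c, sigma
```

In an on-line setting, a pair of sets whose similarity exceeds a threshold would be replaced by the merged set, shrinking the partition without retraining from scratch.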
We analyze a novel multi-level version of a recently introduced compressed sensing (CS) Petrov-Galerkin (PG) method from [H. Rauhut and Ch. Schwab: Compressive sensing Petrov-Galerkin approximation of high-dimensional parametric operator equations, Math. Comp. 86 (2017) 661-700] for the solution of many-parametric partial differential equations. We propose to use multi-level PG discretizations, based on a hierarchy of nested finite-dimensional subspaces, and to reconstruct the parametric solution at each level from level-dependent random samples of the high-dimensional parameter space via CS methods such as weighted ℓ1-minimization. For affine-parametric linear operator equations, we prove that our approach approximates the parametric solution with (almost) optimal convergence order, as specified by certain summability properties of the coefficient sequence in a general polynomial chaos (gpc) expansion of the parametric solution and by the convergence order of the PG discretization in the physical variables. The computation of the parameter samples of the PDE solution is "embarrassingly parallel", as in Monte Carlo methods. Contrary to other recent approaches, and as already noted in [A. Doostan and H. Owhadi: A non-adapted sparse approximation of PDEs with stochastic inputs. JCP 230 (2011) 3015-3034], the optimality of the computed approximations does not require a priori assumptions on the ordering and structure of the index sets of the largest gpc coefficients (such as the "downward closed" property). We prove that, under certain assumptions, the work versus accuracy of the new algorithms is asymptotically equal, up to a constant, to that of one PG solve of the corresponding nominal problem on the finest discretization level.
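The weighted ℓ1-minimization used at each level can be posed as a linear program by splitting the unknown into positive and negative parts. The sketch below is a small self-contained illustration of that standard reformulation (our own naming; the level-dependent weights and the PG discretization of the paper are abstracted away into a generic matrix A, data y, and weight vector w):

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1_min(A, y, w):
    """Solve min_x sum_i w_i |x_i| subject to A x = y, as a linear program.

    Split x = p - q with p, q >= 0; at the optimum |x_i| = p_i + q_i,
    so the objective becomes the linear function w.p + w.q.
    """
    m, N = A.shape
    c = np.concatenate([w, w])          # objective: w . p + w . q
    A_eq = np.hstack([A, -A])           # equality constraint: A (p - q) = y
    res = linprog(c, A_eq=A_eq, b_eq=y,
                  bounds=[(0, None)] * (2 * N), method="highs")
    z = res.x
    return z[:N] - z[N:]
```

Because each sample of the parametric solution is computed independently before the reconstruction step, the expensive PG solves parallelize trivially, and only this small optimization couples them.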