The theme of this thesis is the application of the inverse problem framework with sparsity-enforcing regularization to passive source localization in sensor array processing. The approach involves reformulating the problem in an optimization framework using an overcomplete basis and applying sparsifying regularization, thus focusing the signal energy to achieve excellent resolution. We develop numerical methods for enforcing sparsity using ℓ1 and ℓp regularization. We use the second-order cone programming framework for ℓ1 regularization, which allows efficient solutions using interior point methods. For the ℓp counterpart, the numerical solution is based on half-quadratic regularization. We propose several approaches for using multiple time samples of the sensor outputs in synergy, as well as a method for the automatic choice of the regularization parameter. We conduct extensive numerical experiments analyzing the behavior of our approach and comparing it to existing source localization methods. This analysis demonstrates that our approach has important advantages such as superresolution, robustness to noise and limited data, robustness to correlation of the sources, and no need for accurate initialization. The approach is also extended to allow self-calibration of sensor position errors by using a procedure similar in spirit to block-coordinate descent on an augmented objective function that includes both the locations of the sources and the positions of the sensors.

The second direction of the work in the thesis, which is intimately related to our approach to source localization, is the theoretical analysis of the noiseless signal representation problem using overcomplete bases. Questions considered in this analysis include the uniqueness of solutions to the noiseless ℓ0 problem and the equivalence of solutions of the ℓ0, ℓ1, and ℓp problems. We consider an arbitrary overcomplete basis, and we show that under certain sparsity conditions on the underlying signal, such uniqueness and equivalence hold.
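The core computation described above can be illustrated with a small numerical sketch: single-snapshot source localization posed as an ℓ1-penalized fit over an overcomplete grid of candidate angles, solved with a generic convex solver. This is not the thesis code; the array geometry, angular grid, noise level, and the regularization weight `lam` are illustrative assumptions.

```python
# A minimal sketch (not the thesis code): l1-regularized source localization
# on an overcomplete grid of candidate directions for a uniform linear array.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
M, d = 8, 0.5                                   # sensors, half-wavelength spacing
grid = np.deg2rad(np.arange(-90.0, 90.0, 1.0))  # overcomplete grid of candidate angles

# Overcomplete steering matrix: one column per candidate direction.
A = np.exp(-2j * np.pi * d * np.outer(np.arange(M), np.sin(grid)))

# Simulated single snapshot: two sources at -30 and 20 degrees plus noise.
true_idx = [60, 110]
y = A[:, true_idx] @ np.array([1.0, 0.7]) \
    + 0.05 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

# l1-penalized data fit: a second-order cone program; `lam` trades data fit
# against sparsity of the spatial spectrum (value chosen for illustration).
s = cp.Variable(len(grid), complex=True)
lam = 0.5
cp.Problem(cp.Minimize(cp.norm(A @ s - y, 2) + lam * cp.norm1(s))).solve()

spectrum = np.abs(s.value)
print("estimated directions (deg):",
      np.rad2deg(grid[spectrum > 0.1 * spectrum.max()]))
```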
In applications throughout science and engineering, one is often faced with the challenge of solving an ill-posed inverse problem, where the number of available measurements is smaller than the dimension of the model to be estimated. However, in many practical situations of interest, models are constrained structurally so that they have only a few degrees of freedom relative to their ambient dimension. This paper provides a general framework to convert notions of simplicity into convex penalty functions, resulting in convex optimization solutions to linear, underdetermined inverse problems. The class of simple models considered consists of those formed as the sum of a few atoms from some (possibly infinite) elementary atomic set; examples include well-studied cases such as sparse vectors and low-rank matrices, as well as several others including sums of a few permutation matrices, low-rank tensors, orthogonal matrices, and atomic measures. The convex programming formulation is based on minimizing the norm induced by the convex hull of the atomic set; this norm is referred to as the atomic norm. The facial structure of the atomic norm ball carries a number of favorable properties that are useful for recovering simple models, and an analysis of the underlying convex geometry provides sharp estimates of the number of generic measurements required for exact and robust recovery of models from partial information. These estimates are based on computing the Gaussian widths of tangent cones to the atomic norm ball. When the atomic set has algebraic structure, the resulting optimization problems can be solved or approximated via semidefinite programming. The quality of these approximations affects the number of measurements required for recovery. Thus this work extends the catalog of simple models that can be recovered from limited linear information via tractable convex programming.
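For the simplest atomic set (signed coordinate vectors, whose induced atomic norm is the ℓ1 norm), the recovery program described above reduces to ℓ1 minimization subject to generic linear measurements. The sketch below illustrates only that special case; the dimensions, sparsity level, and number of measurements are illustrative choices, not values from the paper.

```python
# A minimal sketch of atomic-norm recovery for sparse vectors, where the
# atomic norm is the l1 norm. All problem sizes are illustrative.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, k, m = 200, 5, 60              # ambient dimension, active atoms, measurements

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # generic (Gaussian) measurements
y = Phi @ x_true

# Minimize the atomic norm (here the l1 norm) subject to the measurements.
x = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm1(x)), [Phi @ x == y]).solve()

print("relative recovery error:",
      np.linalg.norm(x.value - x_true) / np.linalg.norm(x_true))
```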
Abstract. Suppose we are given a matrix that is formed by adding an unknown sparse matrix to an unknown low-rank matrix. Our goal is to decompose the given matrix into its sparse and low-rank components. Such a problem arises in a number of applications in model and system identification and is intractable to solve in general. In this paper we consider a convex optimization formulation for splitting the specified matrix into its components by minimizing a linear combination of the ℓ1 norm and the nuclear norm of the components. We develop a notion of rank-sparsity incoherence, expressed as an uncertainty principle between the sparsity pattern of a matrix and its row and column spaces, and we use it to characterize both fundamental identifiability and (deterministic) sufficient conditions for exact recovery. Our analysis is geometric in nature, with the tangent spaces to the algebraic varieties of sparse and low-rank matrices playing a prominent role. When the sparse and low-rank matrices are drawn from certain natural random ensembles, we show that the sufficient conditions for exact recovery are satisfied with high probability. We conclude with simulation results on synthetic matrix decomposition problems.

Key words. matrix decomposition, convex relaxation, ℓ1 norm minimization, nuclear norm minimization, uncertainty principle, semidefinite programming, rank, sparsity

AMS subject classifications. 90C25, 90C22, 90C59, 93B30

DOI. 10.1137/090761793

1. Introduction. Complex systems and models arise in a variety of problems in science and engineering. In many applications such complex systems and models are composed of multiple simpler systems and models. Therefore, in order to better understand the behavior and properties of a complex system, a natural approach is to decompose the system into its simpler components. In this paper we consider matrix representations of systems and statistical models in which our matrices are formed by adding together sparse and low-rank matrices. We study the problem of recovering the sparse and low-rank components given no prior knowledge about the sparsity pattern of the sparse matrix or the rank of the low-rank matrix. We propose a tractable convex program to recover these components and provide sufficient conditions under which our procedure recovers the sparse and low-rank matrices exactly.

Such a decomposition problem arises in a number of settings, with the sparse and low-rank matrices having different interpretations depending on the application. In a statistical model selection setting, the sparse matrix can correspond to a Gaussian graphical model [19], and the low-rank matrix can summarize the effect of latent, unobserved variables. Decomposing a given model into these simpler components is useful for developing efficient estimation and inference algorithms. In computational complexity, the notion of matrix rigidity [31] captures the smallest number of entries of a matrix that must be changed in order to reduce its rank below a given threshold.
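The convex program described in the abstract can be sketched directly: split a given matrix C into sparse and low-rank parts by minimizing a weighted combination of the ℓ1 norm and the nuclear norm subject to S + L = C. The problem sizes, the random test matrices, and the trade-off weight `gamma` below are illustrative assumptions, not the paper's settings.

```python
# A minimal sketch (not the paper's code): recover sparse S and low-rank L
# from C = S + L by minimizing ||S||_1 + gamma * ||L||_*.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, r = 30, 2

L_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-2 part
S_true = np.zeros((n, n))
mask = rng.random((n, n)) < 0.05                                    # ~5% support
S_true[mask] = 5.0 * rng.standard_normal(mask.sum())
C = S_true + L_true

S, L = cp.Variable((n, n)), cp.Variable((n, n))
gamma = np.sqrt(n)   # a common heuristic weighting; illustrative, not from the paper
cp.Problem(cp.Minimize(cp.sum(cp.abs(S)) + gamma * cp.normNuc(L)),
           [S + L == C]).solve()

print("sparse error:  ", np.linalg.norm(S.value - S_true, "fro"))
print("low-rank error:", np.linalg.norm(L.value - L_true, "fro"))
```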
Abstract—We propose a shape-based approach to curve evolution for the segmentation of medical images containing known object types. In particular, motivated by the work of Leventon, Grimson, and Faugeras [15], we derive a parametric model for an implicit representation of the segmenting curve by applying principal component analysis to a collection of signed distance representations of the training data. The parameters of this representation are then manipulated to minimize an objective function for segmentation. The resulting algorithm is able to handle multidimensional data, can deal with topological changes of the curve, is robust to noise and initial contour placements, and is computationally efficient. At the same time, it avoids the need for point correspondences during the training phase of the algorithm. We demonstrate this technique by applying it to two medical applications: two-dimensional segmentation of cardiac magnetic resonance imaging (MRI) and three-dimensional segmentation of prostate MRI.

Index Terms—Active contours, binary image alignment, cardiac MRI segmentation, curve evolution, deformable model, distance transforms, eigenshapes, implicit shape representation, medical image segmentation, parametric shape model, principal component analysis, prostate segmentation, shape prior, statistical shape model.
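The training step described above (principal component analysis applied to signed distance representations) can be sketched compactly. The toy example below builds signed distance maps for synthetic training shapes, extracts the mean shape and a few eigenshapes via an SVD, and synthesizes a new implicit curve from a low-dimensional weight vector; the training shapes, grid size, and number of retained modes are illustrative, and this is not the authors' implementation.

```python
# A toy sketch of the shape-model training described above: PCA on signed
# distance maps of aligned training shapes.
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(shape):
    """Signed distance map: negative inside the shape, positive outside."""
    shape = shape.astype(bool)
    return distance_transform_edt(~shape) - distance_transform_edt(shape)

# Toy training set: circles of varying radius on a 64x64 grid.
H = W = 64
yy, xx = np.mgrid[:H, :W]
train = [((xx - W // 2) ** 2 + (yy - H // 2) ** 2 < r ** 2) for r in range(15, 25)]
phis = np.stack([signed_distance(s).ravel() for s in train])    # one row per shape

mean_phi = phis.mean(axis=0)
_, _, Vt = np.linalg.svd(phis - mean_phi, full_matrices=False)  # PCA via SVD
k = 3
eigenshapes = Vt[:k]                                            # principal modes of variation

# During segmentation, the k weights (plus pose parameters) would be adjusted
# to minimize the segmentation objective; here we just synthesize one shape
# from an arbitrary weight vector.
w = np.array([10.0, 0.0, 0.0])
phi_new = (mean_phi + w @ eigenshapes).reshape(H, W)
segmented_region = phi_new < 0      # interior of the zero level set
print("region size (pixels):", segmented_region.sum())
```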