An estimation problem of fundamental interest is that of phase (or angular) synchronization, in which the goal is to recover a collection of phases (or angles) using noisy measurements of relative phases (or angle offsets). It is known that in the Gaussian noise setting, the maximum likelihood estimator (MLE) has an expected squared ℓ2-estimation error that is on the same order as the Cramér-Rao lower bound. Moreover, even though the MLE is an optimal solution to a non-convex quadratic optimization problem, it can be found with high probability using semidefinite programming (SDP), provided that the noise power is not too large. In this paper, we study the estimation and convergence performance of a recently proposed low-complexity alternative to the SDP-based approach, namely, the generalized power method (GPM). Our contribution is twofold. First, we bound the rate at which the estimation error decreases in each iteration of the GPM and use this bound to show that all iterates, not just the MLE, achieve an estimation error that is on the same order as the Cramér-Rao bound. Our result holds under the least restrictive assumption on the noise power and gives the best provable bound on the estimation error known to date. It also implies that one can terminate the GPM at any iteration and still obtain an estimator with a theoretical guarantee on its estimation error. Second, we show that under the same assumption on the noise power as that for the SDP-based method, the GPM converges to the MLE at a linear rate with high probability. This answers a question raised in [3] and shows that the GPM is competitive with the SDP-based method in terms of both theoretical guarantees and numerical efficiency. At the heart of our convergence rate analysis is …
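To make the iteration concrete, here is a minimal sketch of a projected power iteration of the kind the GPM performs, under the standard Gaussian-noise model C = zz^H + σW. The plain projection step below (with no step-size or spectral-shift safeguards, which variants in the literature add) and all variable names are illustrative assumptions, not the paper's exact update.

```python
import numpy as np

def gpm_phase_sync(C, x0, num_iters=50):
    """Projected power iteration: multiply by the data matrix C,
    then project each entry back onto the unit circle."""
    x = x0.copy()
    for _ in range(num_iters):
        y = C @ x
        x = y / np.abs(y)  # entrywise projection onto |x_i| = 1 (assumes y_i != 0)
    return x

# Synthetic instance: C = z z^H + sigma * W with Hermitian Gaussian noise W.
rng = np.random.default_rng(0)
n, sigma = 200, 1.0
z = np.exp(1j * rng.uniform(0, 2 * np.pi, n))      # ground-truth phases
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
W = (A + A.conj().T) / np.sqrt(2)                  # Hermitian noise
C = np.outer(z, z.conj()) + sigma * W

# Spectral initialization: project the leading eigenvector entrywise.
v = np.linalg.eigh(C)[1][:, -1]
x = gpm_phase_sync(C, v / np.abs(v))

# l2 error up to the unavoidable global phase ambiguity.
theta = np.angle(np.vdot(z, x))
print(np.linalg.norm(x - np.exp(1j * theta) * z))
```

The final print aligns the estimate with the ground truth over the global phase, since any common rotation of all phases leaves the relative measurements unchanged.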
We propose a new family of inexact sequential quadratic approximation (SQA) methods, which we call the inexact regularized proximal Newton (IRPN) method, for minimizing the sum of two closed proper convex functions, one of which is smooth and the other possibly non-smooth. Our proposed method features strong convergence guarantees even when applied to problems with degenerate solutions, while allowing the inner minimization to be solved inexactly. Specifically, we prove that when the problem possesses the so-called Luo-Tseng error bound (EB) property, IRPN converges globally to an optimal solution, and the local convergence rate of the sequence of iterates generated by IRPN is linear, superlinear, or even quadratic, depending on the choice of parameters of the algorithm. Prior to this work, the EB property had been used extensively to establish the linear convergence of various first-order methods. However, to the best of our knowledge, this work is the first to use the Luo-Tseng EB property to establish the superlinear convergence of SQA-type methods for non-smooth convex minimization. As a consequence of our result, IRPN is capable of solving regularized regression or classification problems.
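As an illustration, the sketch below implements one regularized proximal Newton step for an ℓ1-regularized least-squares instance, with the quadratic model minimized inexactly by a few proximal-gradient passes. The regularization rule mu = 0.1 * ||grad|| is a hypothetical stand-in for the paper's residual-based parameter choice, and the line search used by the actual method is omitted for brevity.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def irpn_step(x, grad, hess, lam, mu, inner_iters=20):
    """One inexact regularized proximal Newton step for min f(x) + lam*||x||_1:
    build the regularized model q(d) = grad^T d + 0.5 d^T (hess + mu*I) d
    + lam*||x + d||_1 and minimize it approximately by proximal gradient."""
    H = hess + mu * np.eye(len(x))
    L = np.linalg.norm(H, 2)          # Lipschitz constant of the model's gradient
    d = np.zeros_like(x)
    for _ in range(inner_iters):      # inexact inner solve
        d = soft_threshold(x + d - (grad + H @ d) / L, lam / L) - x
    return x + d

# Toy instance: f(x) = 0.5*||Ax - b||^2 with an l1 regularizer.
rng = np.random.default_rng(0)
A, b = rng.normal(size=(50, 20)), rng.normal(size=50)
lam, x = 0.1, np.zeros(20)
H = A.T @ A                           # Hessian of f
for _ in range(10):                   # outer Newton-type loop (no line search here)
    grad = A.T @ (A @ x - b)
    mu = 0.1 * np.linalg.norm(grad)   # regularization shrinks as the residual shrinks
    x = irpn_step(x, grad, H, lam, mu)
```

Tying mu to a residual-type quantity is what lets the regularized Hessian stay well conditioned far from a solution yet vanish near one, which is where the superlinear rates come from.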
In this paper we consider the cubic regularization (CR) method for minimizing a twice continuously differentiable function. While the CR method is widely recognized as a globally convergent variant of Newton's method with superior iteration complexity, existing results on its local quadratic convergence require a stringent non-degeneracy condition. We prove that under a local error bound (EB) condition, which is a much weaker requirement than the existing non-degeneracy condition, the sequence of iterates generated by the CR method converges at least Q-quadratically to a second-order critical point. This indicates that adding a cubic regularization term not only equips Newton's method with remarkable global convergence properties but also enables it to converge quadratically even in the presence of degenerate solutions. As a byproduct, we show that without assuming convexity, the proposed EB condition is equivalent to a quadratic growth condition, which could be of independent interest. To demonstrate the usefulness and relevance of our convergence analysis, we focus on two concrete nonconvex optimization problems that arise in phase retrieval and low-rank matrix recovery, respectively, and prove that with overwhelming probability, the sequence of iterates generated by the CR method for solving these two problems converges at least Q-quadratically to a global minimizer. We also present numerical results of the CR method when applied to solve these two problems to support and complement our theoretical development.
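For concreteness, the sketch below solves the classical Nesterov-Polyak cubic subproblem min_s g^T s + 0.5 s^T H s + (M/6)||s||^3 via its optimality condition (H + λI)s = -g with λ = (M/2)||s||, using an eigendecomposition and bisection on the resulting scalar equation. The so-called hard case (g orthogonal to the bottom eigenspace of H) is ignored for brevity, and the value of M and the quartic test function are illustrative assumptions.

```python
import numpy as np

def cr_step(g, H, M):
    """Solve min_s g^T s + 0.5 s^T H s + (M/6)||s||^3 via the secular
    equation ||(H + lam*I)^{-1} g|| = 2*lam/M, lam >= max(0, -lam_min(H))."""
    evals, V = np.linalg.eigh(H)
    gt = V.T @ g
    norm_s = lambda lam: np.linalg.norm(gt / (evals + lam))
    lo = max(0.0, -evals[0]) + 1e-12   # keep H + lam*I positive definite
    hi = lo + 1.0
    while norm_s(hi) > 2.0 * hi / M:   # bracket the unique root
        hi *= 2.0
    for _ in range(100):               # bisection (norm_s is decreasing in lam)
        lam = 0.5 * (lo + hi)
        if norm_s(lam) > 2.0 * lam / M:
            lo = lam
        else:
            hi = lam
    return -(V @ (gt / (evals + 0.5 * (lo + hi))))

# CR iteration on the smooth convex quartic f(x) = 0.25*||x||^4 - b^T x.
b = np.array([1.0, 2.0, 3.0])
f_grad = lambda x: (x @ x) * x - b
f_hess = lambda x: (x @ x) * np.eye(3) + 2.0 * np.outer(x, x)
x = np.zeros(3)
for _ in range(20):
    x = x + cr_step(f_grad(x), f_hess(x), M=10.0)  # M: illustrative Hessian-Lipschitz bound
print(np.linalg.norm(f_grad(x)))                   # gradient norm at the final iterate
```

Note that at x = 0 the Hessian of this quartic is singular, so the plain Newton step is undefined there, while the cubic term keeps the subproblem well posed; this is exactly the degenerate regime the EB-based analysis targets.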
The vast majority of zeroth-order optimization methods try to imitate first-order methods via some smooth approximation of the gradient. Here, the smaller the smoothing parameter, the smaller the gradient approximation error. We show that for the majority of zeroth-order methods, this smoothing parameter cannot, however, be chosen arbitrarily small, as numerical cancellation errors will dominate. As such, theoretical and numerical performance can differ significantly. Using classical tools from numerical differentiation, we propose a new smoothed approximation of the gradient that can be integrated into general zeroth-order algorithmic frameworks. Since the proposed smoothed approximation does not suffer from cancellation errors, the smoothing parameter (and hence the approximation error) can be made arbitrarily small. Sublinear convergence rates are proved for algorithms based on our smoothed approximation, and numerical experiments are presented to demonstrate the superiority of these algorithms.
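The cancellation effect is easy to reproduce. The snippet below compares a forward-difference directional-derivative estimate, whose error blows up as the smoothing parameter h shrinks, against the classical complex-step derivative, one well-known cancellation-free tool from numerical differentiation; the complex-step method is shown purely for illustration and is not necessarily the approximation proposed in the paper.

```python
import numpy as np

f = lambda x: np.sum(x ** 2)             # smooth test function, accepts complex input

rng = np.random.default_rng(0)
x = rng.normal(size=10)
u = rng.normal(size=10)
u /= np.linalg.norm(u)                   # random unit direction
true_dd = 2.0 * (x @ u)                  # exact directional derivative of f

for h in [1e-2, 1e-6, 1e-10, 1e-14]:
    fwd = (f(x + h * u) - f(x)) / h      # forward difference: subtraction cancels digits
    cstep = np.imag(f(x + 1j * h * u)) / h  # complex step: no subtraction, no cancellation
    print(f"h={h:.0e}  fwd err={abs(fwd - true_dd):.1e}  "
          f"cstep err={abs(cstep - true_dd):.1e}")
```

Running this shows the forward-difference error first decreasing with h, then growing again once floating-point cancellation dominates, while the complex-step error stays at roundoff level for every h; this is precisely the gap between theoretical and numerical performance that the abstract describes.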