An earlier paper proved the convergence of a variable stepsize Bregman operator splitting algorithm (BOSVS) for minimizing $\phi(Bu) + H(u)$, where $H$ and $\phi$ are convex functions and $\phi$ is possibly nonsmooth. The algorithm was shown to be relatively efficient when applied to partially parallel magnetic resonance image reconstruction problems. In this paper, the convergence rate of BOSVS is analyzed. When $H(u) = \|Au - f\|^2$, where $A$ is a matrix, it is shown that for an ergodic approximation $u_k$ obtained by averaging $k$ BOSVS iterates, the error in the objective value $\phi(Bu_k) + H(u_k)$ is $O(1/k)$. When the optimization problem has a unique solution $u^*$, an estimate for the error in the iterates is also obtained. The theoretical analysis is compared to observed convergence rates for partially parallel magnetic resonance image reconstruction problems, where $A$ is a large, dense, ill-conditioned matrix.
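Written out, the ergodic bound in the abstract has the following shape; the bar notation for the average is introduced here for clarity, and the exact constants are those given in the paper:

```latex
% Ergodic average of the first k BOSVS iterates u_1, ..., u_k,
% and the O(1/k) bound on the objective error stated in the abstract:
\[
  \bar{u}_k = \frac{1}{k} \sum_{j=1}^{k} u_j, \qquad
  \bigl(\phi(B\bar{u}_k) + H(\bar{u}_k)\bigr)
    - \min_{u}\,\bigl(\phi(Bu) + H(u)\bigr) \;=\; O(1/k).
\]
```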
The gradient descent method minimizes an unconstrained nonlinear optimization problem with a convergence rate of $O(1/\sqrt{K})$, where $K$ is the number of iterations performed by the gradient method. Traditionally, this analysis is obtained for smooth objective functions having Lipschitz continuous gradients. This paper aims to consider a more general class of nonlinear programming problems in which functions have Hölder continuous gradients. More precisely, for any function $f$ in this class, denoted by $C^{1,\nu}_L$, there exist $\nu \in (0, 1]$ and $L > 0$ such that the relation $\|\nabla f(x) - \nabla f(y)\| \le L \|x - y\|^{\nu}$ holds for all $x, y \in \mathbb{R}^n$. We prove that the gradient descent method converges globally to a stationary point and exhibits a convergence rate of $O(1/K^{\frac{\nu}{\nu+1}})$ when the step size is chosen properly, i.e., less than $\left[\frac{\nu+1}{L}\right]^{\frac{1}{\nu}} \|\nabla f(x_k)\|^{\frac{1}{\nu}-1}$. Moreover, the algorithm employs $O(1/\epsilon^{\frac{1}{\nu}+1})$ calls to an oracle to find $\bar{x}$ such that $\|\nabla f(\bar{x})\| < \epsilon$.
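As a minimal sketch (not the paper's code), the step-size rule can be implemented as follows; the function name, the 1/2 safety factor (the abstract requires the step to be strictly below the bound), and the quadratic test problem are all illustrative assumptions:

```python
import numpy as np

def holder_gradient_descent(grad, x0, L, nu, eps=1e-6, max_iter=100_000):
    """Gradient descent with the Holder step-size rule from the abstract:
    t_k < [(nu+1)/L]^(1/nu) * ||grad f(x_k)||^(1/nu - 1)."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < eps:                       # stop once ||grad f(x)|| < eps
            return x, k
        # Step strictly below the bound via a 1/2 safety factor:
        t = 0.5 * ((nu + 1.0) / L) ** (1.0 / nu) * gnorm ** (1.0 / nu - 1.0)
        x = x - t * g                         # x_{k+1} = x_k - t_k * grad f(x_k)
    return x, max_iter

# Illustrative use on a quadratic, where nu = 1 and L is the largest
# Hessian eigenvalue, so the bound reduces to the classical t < 2/L:
A = np.diag([1.0, 10.0])
x_star, iters = holder_gradient_descent(lambda x: A @ x, x0=[1.0, 1.0], L=10.0, nu=1.0)
```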
An alternating direction approximate Newton (ADAN) method is developed for solving inverse problems of the form $\min\{\phi(Bu) + (1/2)\|Au - f\|_2^2\}$, where $\phi$ is convex and possibly nonsmooth, and $A$ and $B$ are matrices. Problems of this form arise in image reconstruction, where $A$ is the matrix describing the imaging device, $f$ is the measured data, $\phi$ is a regularization term, and $B$ is a derivative operator. The proposed algorithm is designed to handle applications where $A$ is a large, dense, ill-conditioned matrix. The algorithm is based on the alternating direction method of multipliers (ADMM) and an approximation to Newton's method in which a term in Newton's Hessian is replaced by a Barzilai-Borwein (BB) approximation. It is shown that ADAN converges to a solution of the inverse problem. Numerical results are provided using test problems from parallel magnetic resonance imaging. ADAN was faster than a proximal ADMM scheme that does not employ a BB Hessian approximation, and it was more stable and much simpler than the related Bregman operator splitting algorithm with variable stepsize (BOSVS), which also employs a BB-based Hessian approximation.
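The core idea, replacing the $A^{\mathsf T}A$ term in the $u$-subproblem with a BB scalar $\delta_k I$ so that each iteration needs only matrix-vector products with $A$ and $A^{\mathsf T}$, can be illustrated with a simplified sketch. Here $B$ is taken as the identity and $\phi$ as an $\ell_1$ penalty, both illustrative assumptions rather than the paper's setting, and the update formulas are a generic BB-linearized ADMM rendering, not ADAN's exact iteration:

```python
import numpy as np

def shrink(v, t):
    """Soft-thresholding: proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def bb_linearized_admm(A, f, lam=0.1, rho=1.0, iters=200):
    """Sketch of ADMM for min lam*||u||_1 + 0.5*||A u - f||^2 (B = I assumed),
    with the A^T A term in the u-update replaced by a BB scalar delta_k * I."""
    m, n = A.shape
    u = np.zeros(n); w = np.zeros(n); b = np.zeros(n)
    u_old = u.copy()
    delta = 1.0                               # initial Hessian approximation delta_0 * I
    for _ in range(iters):
        g = A.T @ (A @ u - f)                 # gradient of the data-fidelity term at u
        du = u - u_old
        if du @ du > 0:                       # BB scalar: delta = ||A du||^2 / ||du||^2
            Adu = A @ du
            delta = max((Adu @ Adu) / (du @ du), 1e-12)
        u_old = u.copy()
        # Linearized u-update: quadratic model with delta*I in place of A^T A
        u = (delta * u - g - b + rho * w) / (delta + rho)
        w = shrink(u + b / rho, lam / rho)    # w-update: prox of the l1 term
        b = b + rho * (u - w)                 # multiplier update for constraint u = w
    return w

# Illustrative use on a random sparse-recovery instance:
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100); x_true[3] = 1.0
u_hat = bb_linearized_admm(A, A @ x_true, lam=0.05)
```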