To our families

Preface

Optimization is a rich and thriving mathematical discipline. Properties of minimizers and maximizers of functions rely intimately on a wealth of techniques from mathematical analysis, including tools from calculus and its generalizations, topological notions, and more geometric ideas. The theory underlying current computational optimization techniques grows ever more sophisticated: duality-based algorithms, interior point methods, and control-theoretic applications are typical examples. The powerful and elegant language of convex analysis unifies much of this theory. Hence our aim of writing a concise, accessible account of convex analysis and its applications and extensions, for a broad audience.

For students of optimization and analysis, there is great benefit to blurring the distinction between the two disciplines. Many important analytic problems have illuminating optimization formulations and hence can be approached through our main variational tools: subgradients and optimality conditions, the many guises of duality, metric regularity, and so forth.
More generally, the idea of convexity is central to the transition from classical analysis to various branches of modern analysis: from linear to nonlinear analysis, from smooth to nonsmooth, and from the study of functions to multifunctions. Thus, although we use certain optimization models repeatedly to illustrate the main results (models such as linear and semidefinite programming duality and cone polarity), we constantly emphasize the power of abstract models and notation.

Good reference works on finite-dimensional convex analysis already exist. Rockafellar's classic Convex Analysis [167] has been indispensable and ubiquitous since the 1970s, and a more general sequel with Wets, Variational Analysis [168], appeared recently. Hiriart-Urruty and Lemaréchal's Convex Analysis and Minimization Algorithms [97] is a comprehensive but gentler introduction. Our goal is not to supplant these works, but on the contrary to promote them, and thereby to motivate future researchers. This book aims to make converts.

We try to be succinct rather than systematic, avoiding becoming bogged down in technical details. Our style is relatively informal; for example, the text of each section creates the context for many of the result statements...
Let f be a continuous function on R^n, and suppose f is continuously differentiable on an open dense subset. Such functions arise in many applications, and very often minimizers are points at which f is not differentiable. Of particular interest is the case where f is not convex, and perhaps not even locally Lipschitz, but where the gradient of f is easily computed wherever it is defined. We present a practical, robust algorithm to locally minimize such functions, based on gradient sampling. No subgradient information is required by the algorithm. When f is locally Lipschitz and has bounded level sets, and the sampling radius ε is fixed, we show that, with probability one, the algorithm generates a sequence with a cluster point that is Clarke ε-stationary. Furthermore, we show that if f has a unique Clarke stationary point x̄, then the set of all cluster points generated by the algorithm converges to x̄ as ε is reduced to zero.
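The abstract describes the gradient sampling idea only in outline. The following Python sketch illustrates one plausible reading of it, and is not the authors' implementation: gradients are sampled in an ε-ball around the current iterate, a minimum-norm element of their convex hull supplies a descent direction, and a backtracking line search sets the step. The inner minimum-norm problem is solved here by a simple Frank-Wolfe iteration rather than an exact quadratic program, and the test objective, parameter values, and stopping rules are illustrative assumptions.

```python
import numpy as np

def min_norm_in_hull(G, iters=300):
    """Frank-Wolfe iteration for the minimum-norm element of the
    convex hull of the columns of G (a small inner subproblem)."""
    m = G.shape[1]
    lam = np.full(m, 1.0 / m)
    for k in range(iters):
        grad = G.T @ (G @ lam)        # gradient of 0.5 * ||G @ lam||^2
        j = np.argmin(grad)           # best simplex vertex
        step = 2.0 / (k + 2.0)
        lam *= (1.0 - step)
        lam[j] += step
    return G @ lam

def gradient_sampling(f, grad, x0, eps=0.1, m=10, max_iter=100,
                      tol=1e-8, seed=0):
    """Locally minimize f from x0, sampling gradients in an eps-ball."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    n = x.size
    for _ in range(max_iter):
        # Gradients at x and at m random nearby points; f is differentiable
        # on an open dense set, so random samples are differentiable points
        # with probability one.
        pts = [x] + [x + eps * rng.uniform(-1.0, 1.0, n) for _ in range(m)]
        G = np.column_stack([grad(p) for p in pts])
        g = min_norm_in_hull(G)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:               # approximate Clarke eps-stationarity
            break
        d = -g / gnorm                # normalized descent direction
        t = 1.0                       # Armijo backtracking line search
        while f(x + t * d) > f(x) - 1e-4 * t * gnorm:
            t *= 0.5
            if t < 1e-12:
                break
        if t < 1e-12:
            break
        x = x + t * d
    return x

# Illustrative nonsmooth objective with its minimizer at the kink x = 0:
f = lambda x: abs(x[0]) + x[1] ** 2
grad = lambda x: np.array([np.sign(x[0]), 2.0 * x[1]])  # defined a.e.
x_min = gradient_sampling(f, grad, [1.0, 1.0])
```

Note that the iterate cannot approach the minimizer more closely than the sampling radius allows; as the abstract indicates, ε must be driven to zero to recover Clarke stationary points in the limit.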
We establish the following result: if the graph of a (nonsmooth) extended-real-valued function f : R^n → R ∪ {+∞} is closed and admits a Whitney stratification, then the norm of the gradient of f at x ∈ dom f relative to the stratum containing x bounds from below all norms of Clarke subgradients of f at x. As a consequence, we obtain some Morse-Sard type theorems as well as a nonsmooth Kurdyka-Łojasiewicz inequality for functions definable in an arbitrary o-minimal structure.
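In symbols, the lower-bound conclusion can be written as follows, where the notation M_x for the stratum containing x is ours:

```latex
\bigl\| \nabla \bigl( f|_{M_x} \bigr)(x) \bigr\| \;\le\; \|x^*\|
\qquad \text{for every Clarke subgradient } x^* \in \partial f(x),
```

with f|_{M_x} denoting the restriction of f to the stratum M_x, so that the left-hand side is the norm of the ordinary (Riemannian) gradient of f along that stratum.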
The idea of a finite collection of closed sets having "linearly regular intersection" at a point is crucial in variational analysis. This central theoretical condition also has striking algorithmic consequences: in the case of two sets, one of which satisfies a further regularity condition (convexity or smoothness for example), we prove that von Neumann's method of "alternating projections" converges locally to a point in the intersection, at a linear rate associated with a modulus of regularity. As a consequence, in the case of several arbitrary closed sets having linearly regular intersection at some point, the method of "averaged projections" converges locally at a linear rate to a point in the intersection. Inexact versions of both algorithms also converge linearly.
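The alternating projections scheme mentioned above is easy to state concretely. The sketch below is a minimal convex two-set illustration in Python, not the paper's general nonconvex setting: the choice of sets (a ball and a hyperplane, which meet transversally and hence have linearly regular intersection), the projection formulas, and the iteration count are assumptions made for the example.

```python
import numpy as np

def proj_ball(x, center, radius):
    # Euclidean projection onto the closed ball {y : ||y - center|| <= radius}
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

def proj_hyperplane(x, a, b):
    # Euclidean projection onto the hyperplane {y : a . y = b}
    return x - (a @ x - b) * a / (a @ a)

def alternating_projections(x0, proj_A, proj_B, iters=100):
    # von Neumann's method: project onto A, then onto B, repeatedly
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = proj_B(proj_A(x))
    return x

# Example: the unit ball and the line x + y = 1, which cross transversally,
# so the iterates converge linearly to a point in the intersection.
a = np.array([1.0, 1.0])
x_star = alternating_projections(
    [3.0, -2.0],
    lambda z: proj_ball(z, np.zeros(2), 1.0),
    lambda z: proj_hyperplane(z, a, 1.0),
)
```

For the averaged-projections variant discussed in the abstract, each step would instead replace x by the mean of its projections onto all the sets, which is what extends the linear-convergence guarantee to several arbitrary closed sets under linear regularity.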