Abstract. Optimization problems involving the eigenvalues of symmetric and nonsymmetric matrices present a fascinating mathematical challenge. Such problems arise often in theory and practice, particularly in engineering design, and are amenable to a rich blend of classical mathematical techniques and contemporary optimization theory. This essay presents a personal choice of some central mathematical ideas, outlined for the broad optimization community. I discuss the convex analysis of spectral functions and invariant matrix norms, touch briefly on semidefinite representability, and then outline two broader algebraic viewpoints based on hyperbolic polynomials and Lie algebra. Analogous nonconvex notions lead into eigenvalue perturbation theory. The last third of the article concerns stability, for polynomials, matrices, and associated dynamical systems, ending with a section on robustness. The powerful and elegant language of nonsmooth analysis appears throughout, as a unifying narrative thread.

Key words. Eigenvalue optimization - convexity - nonsmooth analysis - duality - semidefinite program - subdifferential - Clarke regular - chain rule - sensitivity - eigenvalue perturbation - partly smooth - spectral function - unitarily invariant norm - hyperbolic polynomial - stability - robust control - pseudospectrum - H∞ norm

PART I: INTRODUCTION
1 Von Neumann and invariant matrix norms

Before outlining this survey, I would like to suggest its flavour with a celebrated classical result. This result, von Neumann's characterization of unitarily invariant matrix norms, serves both as a historical jumping-off point and as an elegant juxtaposition of the central ingredients of this article.

Von Neumann [64] was interested in unitarily invariant norms $\|\cdot\|$ on the vector space $M_n$ of $n$-by-$n$ complex matrices:
$$
\|UXV\| = \|X\| \quad \text{for all } U, V \in U_n, \; X \in M_n,
$$
where $U_n$ denotes the group of unitary matrices. The singular value decomposition shows that the invariants of a matrix $X$ under unitary transformations of the form $X \mapsto UXV$ are given by the singular values $\sigma_1(X) \ge \sigma_2(X) \ge \cdots \ge \sigma_n(X)$, the eigenvalues of the matrix $\sqrt{X^*X}$. Hence any invariant norm $\|\cdot\|$ must be a function of the vector $\sigma(X)$, so we can write $\|X\| = g(\sigma(X))$ for some function $g : \mathbf{R}^n \to \mathbf{R}$. We ensure this last equation if we define $g(x) = \|\mathrm{Diag}\,x\|$ for vectors $x \in \mathbf{R}^n$, and in that case $g$ is a symmetric gauge function: that is, $g$ is a norm on $\mathbf{R}^n$ whose value is invariant under permutations and sign changes of the components. Von Neumann's beautiful insight was that this simple necessary condition is in fact also sufficient.

Theorem 1.1 (von Neumann, 1937). The unitarily invariant matrix norms are exactly the symmetric gauge functions of the singular values.

Von Neumann's proof rivals the result in elegance. He proceeds by calculating the dual norm of a matrix $Y \in M_n$,
$$
\|Y\|^* = \max\{\langle X, Y \rangle : \|X\| \le 1\},
$$
where the real inner product $\langle X, Y \rangle$ on $M_n$ is the real part of the trace of $X^*Y$. Specifically, he shows that any symmetric gauge function $g$ satisfies the simple duality relationship
$$
(g \circ \sigma)^* = g^* \circ \sigma,
$$
where $g^*$ is the norm on $\mathbf{R}^n$ dual to $g$. From this, the hard part of his result follows: ...
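To make the duality relationship concrete, here is a minimal numerical sketch, an illustration added for this rewrite rather than part of von Neumann's argument, assuming NumPy. Take the symmetric gauge function $g$ to be the $\ell_1$ norm on $\mathbf{R}^n$: then $g \circ \sigma$ is the nuclear (trace) norm, its dual gauge $g^*$ is the $\ell_\infty$ norm, and the relationship predicts that the dual of the nuclear norm is the spectral norm. The helper names `sigma`, `nuclear`, and `spectral` are of course illustrative, not from the text.

```python
import numpy as np

# Sketch of von Neumann duality for g = l1 norm (so g* = l-infinity):
# the dual of the nuclear norm should be the spectral norm.

def sigma(X):
    """Singular values of X, in decreasing order."""
    return np.linalg.svd(X, compute_uv=False)

def nuclear(X):    # g(sigma(X)) with g = l1
    return sigma(X).sum()

def spectral(X):   # g*(sigma(X)) with g* = l-infinity
    return sigma(X).max()

def inner(X, Y):   # real inner product <X, Y> = Re tr(X* Y)
    return np.trace(X.conj().T @ Y).real

rng = np.random.default_rng(0)
n = 5
Y = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Dual norm: ||Y||* = max { <X, Y> : ||X|| <= 1 }.  Random matrices
# scaled into the nuclear-norm unit ball never exceed spectral(Y)...
for _ in range(10000):
    X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    X /= nuclear(X)
    assert inner(X, Y) <= spectral(Y) + 1e-10

# ...while the rank-one matrix built from Y's leading singular vectors
# is feasible (nuclear norm 1) and attains the bound, so the maximum
# in the dual-norm problem equals spectral(Y), as the duality predicts.
U, s, Vh = np.linalg.svd(Y)
X_opt = np.outer(U[:, 0], Vh[0, :])
print(nuclear(X_opt))                 # 1.0
print(inner(X_opt, Y), spectral(Y))   # equal up to rounding
```

The sampling step only probes the unit ball, so by itself it gives a lower bound; the aligned rank-one choice, which realizes equality in von Neumann's trace inequality $\langle X, Y \rangle \le \sum_i \sigma_i(X)\sigma_i(Y)$, is what closes the gap.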