Saddle-point or primal-dual methods have recently attracted renewed interest as a systematic technique for designing distributed algorithms that solve convex optimization problems. When implemented online for streaming data or as dynamic feedback controllers, these algorithms become subject to disturbances and noise; convergence rates provide incomplete performance information, and quantifying input-output performance becomes more important. We analyze the input-output performance of the continuous-time saddle-point method applied to linearly constrained quadratic programs, providing explicit expressions for the saddle-point H2 norm under a relevant input-output configuration. We then derive analogous results for regularized and augmented versions of the saddle-point algorithm. We observe some rather peculiar effects: a modest amount of regularization significantly improves the transient performance, while augmentation does not necessarily offer improvement. We then propose a distributed dual version of the algorithm which overcomes some of the performance limitations imposed by augmentation. Finally, we apply our results to a resource allocation problem to compare the input-output performance of various centralized and distributed saddle-point implementations, and show that distributed algorithms may perform as well as their centralized counterparts.
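The H2 analysis announced above rests on the standard Gramian characterization of the H2 norm of a stable LTI system. As a hedged numerical sketch of that characterization (the matrices below are arbitrary stand-ins, not the saddle-point dynamics analyzed in the paper):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# For a stable LTI system  xdot = A x + B w,  z = C x,  the squared H2 norm
# can be computed from either Gramian. Illustrative placeholder data:
A = np.array([[-1.0, 0.5],
              [0.0, -2.0]])
B = np.eye(2)   # disturbance input matrix
C = np.eye(2)   # performance output matrix

# Observability Gramian X solves  A^T X + X A + C^T C = 0;
# solve_continuous_lyapunov(a, q) solves  a x + x a^T = q.
X = solve_continuous_lyapunov(A.T, -C.T @ C)
h2_sq_obs = np.trace(B.T @ X @ B)

# Cross-check via the controllability Gramian P:  A P + P A^T + B B^T = 0
P = solve_continuous_lyapunov(A, -B @ B.T)
h2_sq_ctrl = np.trace(C @ P @ C.T)

print(np.sqrt(h2_sq_obs))  # the two Gramian computations agree
```

The paper's contribution is closed-form expressions for this quantity for the saddle-point dynamics; the snippet only shows the generic Lyapunov-equation route against which such expressions can be checked numerically.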
I. INTRODUCTION

Saddle-point methods are a class of continuous-time gradient-based algorithms for solving constrained convex optimization problems. Introduced in the early 1950s [1], [2], these algorithms are designed to seek the saddle points of the optimization problem's Lagrangian function. These saddle points are in one-to-one correspondence with the solutions of the first-order optimality (KKT) conditions, and the algorithm therefore drives its internal state towards the global optimizer of the convex program; see [3]-[5] for convergence results.

Recently, these algorithms have attracted renewed attention, e.g., in the context of machine learning [6] and in the control literature for solving distributed convex optimization problems [7], where agents cooperate through a communication network to solve an optimization problem with minimal or no centralized coordination. Applications of distributed optimization include utility maximization [3], congestion management in communication networks [8], and control in power systems [9]-[14]. While most standard optimization algorithms require centralized information to compute the optimizer, saddle-point algorithms often yield distributed strategies in which agents perform state updates using only locally measured information and communication with some subset of other agents. We refer the reader to [4], [15]-[20] for control-theoretic interpretations of these algorithms.

Rather than solve the optimization problem offline, it is desirable to run these distributed algorithms online as controllers, in feedback with system and/or disturbance measurements, to provide references so t...
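The mechanism described above, gradient descent on the primal variable and gradient ascent on the multiplier applied to the Lagrangian, can be sketched numerically. The QP data below are illustrative placeholders, not from the paper:

```python
import numpy as np

# Continuous-time saddle-point flow for a linearly constrained QP:
#   minimize (1/2) x^T Q x + c^T x   subject to   A x = b,
# with Lagrangian L(x, lam) = (1/2) x^T Q x + c^T x + lam^T (A x - b).
Q = np.array([[3.0, 1.0],
              [1.0, 2.0]])          # positive definite cost matrix (illustrative)
c = np.array([-1.0, 1.0])
A = np.array([[1.0, 1.0]])          # single equality constraint
b = np.array([1.0])

def saddle_point_flow(x, lam):
    """Descent in x, ascent in lam, on the Lagrangian L(x, lam)."""
    dx = -(Q @ x + c + A.T @ lam)   # -grad_x L
    dlam = A @ x - b                # +grad_lam L
    return dx, dlam

# Forward-Euler integration of the flow (small step size for stability)
x, lam = np.zeros(2), np.zeros(1)
dt = 0.01
for _ in range(20000):
    dx, dlam = saddle_point_flow(x, lam)
    x, lam = x + dt * dx, lam + dt * dlam

# At the equilibrium, the KKT conditions hold:
# stationarity  Q x + c + A^T lam = 0  and feasibility  A x = b.
print(np.allclose(Q @ x + c + A.T @ lam, 0, atol=1e-6),
      np.allclose(A @ x, b, atol=1e-6))
```

The equilibrium of this flow is exactly the KKT point of the QP, which is the one-to-one correspondence the text refers to; convergence of the continuous-time flow is established in the cited works [3]-[5].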