Optimization algorithms are ubiquitous across all branches of engineering, computer science, and applied mathematics. Typically, such algorithms are iterative in nature and seek to approximate the solution to a difficult optimization problem. The general form of an optimization problem is

    minimize    f(x)
    subject to  x ∈ C,

where x ∈ R^d is the vector of d real decision variables, C is the feasible set, and f : R^d → R is the objective function. The goal is to find a point x⋆ ∈ C such that f(x⋆) is as small as possible. For example, in structural mechanics, x could be the lengths and widths of the members in a truss design, C could encode stress limitations of the materials and load-bearing constraints, and f could be the total cost of the design. Solving this optimization problem would yield the cheapest truss design that satisfies all design specifications. In statistics, x could be the parameters of a statistical model, C could correspond to constraints such as certain parameters being positive, and f could be the negative log-likelihood of the model given the observed data. Solving this optimization problem finds the maximum likelihood model.

While some optimization problems can be solved analytically (such as least squares problems), analytical solutions do not exist for most optimization problems, and numerical methods must be used to approximate the solution. Even when an analytical solution exists, it may be preferable to use numerical methods because they can be more computationally efficient. Iterative optimization algorithms typically begin with an estimate x_0 of the solution, and each step refines the estimate, producing a sequence x_0, x_1, .... If the algorithm is properly designed, then x_k → x⋆ in the limit, and as many iterations as needed can be performed to achieve the desired level of accuracy.

Depending on the nature and structure of f and C, different optimization algorithms may be appropriate. The design, selection, and tuning of optimization algorithms is more of an art than a science. Many different algorithms have been proposed, some with theoretical guarantees and others with a strong record of empirical performance. In practice, some algorithms converge slowly yet are predictable and reliable, while others converge more rapidly on average but can fail spectacularly in some cases. Algorithm selection and tuning is typically performed by experts with deep domain knowledge.

In many ways, iterative algorithms behave like control systems, and the choice of algorithm is akin to the choice of controller. In the sections that follow, we formalize this connection and describe how optimization algorithms can be viewed as controllers performing robust control. This work also shows how dissipativity theory can be used to analyze the performance of many classes of optimization algorithms, which allows the selection and tuning of optimization algorithms to be performed in an automated and systematic way.
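To make the idea of an iterative algorithm concrete, the following is a minimal sketch (not taken from this work) of gradient descent applied to an unconstrained least-squares problem. The data matrix, step size, and iteration count are illustrative assumptions; the sketch simply shows the sequence x_0, x_1, ... being refined at each step and compares the final iterate to the analytical least-squares solution.

```python
# Minimal sketch (illustrative, not from the paper): gradient descent
# producing the iterate sequence x_0, x_1, ... for the unconstrained
# least-squares problem  minimize_x f(x) = 0.5 * ||A x - b||^2.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))   # hypothetical data matrix
b = rng.standard_normal(20)        # hypothetical observations

def grad_f(x):
    """Gradient of f(x) = 0.5 * ||A x - b||^2."""
    return A.T @ (A @ x - b)

# Fixed step size 1/L, where L is the largest eigenvalue of A^T A
# (the Lipschitz constant of the gradient) -- a standard safe choice.
L = np.linalg.norm(A, 2) ** 2
step = 1.0 / L

x = np.zeros(5)                    # initial estimate x_0
for k in range(500):               # each iteration refines the estimate
    x = x - step * grad_f(x)

# For least squares an analytical solution is available, so we can
# verify that the iterates approach it.
x_star = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(x - x_star))  # small after enough iterations
```

The step size used here is exactly the kind of tuning decision discussed above: other choices may converge faster on some problems but fail on others, which is the trade-off this work aims to analyze systematically.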