Sequential quadratic programming (SQP) methods have proved highly effective for solving constrained optimization problems with smooth nonlinear functions in the objective and constraints. Here we consider problems with general inequality constraints (linear and nonlinear). We assume that first derivatives are available and that the constraint gradients are sparse. Second derivatives are assumed to be unavailable or too expensive to calculate.

We discuss an SQP algorithm that uses a smooth augmented Lagrangian merit function and makes explicit provision for infeasibility in the original problem and the QP subproblems. The Hessian of the Lagrangian is approximated using a limited-memory quasi-Newton method.

SNOPT is a particular implementation that uses a reduced-Hessian semidefinite QP solver (SQOPT) for the QP subproblems. It is designed for problems with many thousands of constraints and variables but is best suited for problems with a moderate number of degrees of freedom (say, up to 2000). Numerical results are given for most of the CUTEr and COPS test collections (about 1020 examples of all sizes up to 40000 constraints and variables, and up to 20000 degrees of freedom).

... become large), SNOPT enters nonlinear elastic mode and solves the problem (NP(γ)), in which the nonlinear constraints are relaxed by nonnegative elastic variables v and w and the objective becomes f(x) + γeᵀ(v + w). This objective is called a composite objective, and the penalty parameter γ (γ ≥ 0) may take a finite sequence of increasing values. If (NP) has a feasible solution and γ is sufficiently large, the solutions to (NP) and (NP(γ)) are identical. If (NP) has no feasible solution, (NP(γ)) will tend to determine a "good" infeasible point if γ is again sufficiently large. (If γ were infinite, the nonlinear constraint violations would be minimized subject to the linear constraints and bounds.) A similar ℓ1 formulation of (NP) is used in the SQP method of Tone [98] and is fundamental to the Sℓ1QP algorithm of Fletcher [38]. See also Conn [25] and Spellucci [94].
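The effect of the elastic reformulation can be seen on a toy problem. The sketch below is a minimal illustration, not SNOPT: it uses SciPy's SLSQP solver as a stand-in, and the problem data (f(x) = x², the deliberately inconsistent constraints x ≥ 1 and x ≤ −1, and the penalty value GAMMA) are hypothetical choices for the demo. Because (NP) here is infeasible, the elastic problem instead finds the point that minimizes the total constraint violation, illustrating the "good infeasible point" behavior described above.

```python
from scipy.optimize import minimize

GAMMA = 10.0  # penalty parameter gamma (hypothetical value for this demo)

def composite(z):
    # Composite objective f(x) + gamma * (v1 + v2), with z = [x, v1, v2]
    # and f(x) = x^2. v1, v2 are the elastic variables.
    x, v1, v2 = z
    return x**2 + GAMMA * (v1 + v2)

# Infeasible pair of constraints x >= 1 and x <= -1, each relaxed by a
# nonnegative elastic variable: x - 1 + v1 >= 0 and -x - 1 + v2 >= 0.
cons = [
    {"type": "ineq", "fun": lambda z: z[0] - 1.0 + z[1]},
    {"type": "ineq", "fun": lambda z: -z[0] - 1.0 + z[2]},
]
bounds = [(None, None), (0.0, None), (0.0, None)]  # v1 >= 0, v2 >= 0

res = minimize(composite, x0=[3.0, 0.0, 0.0], method="SLSQP",
               bounds=bounds, constraints=cons)
x, v1, v2 = res.x
# For large gamma the total violation v1 + v2 = 2 is unavoidable and
# constant on [-1, 1], so the solver settles at x = 0, v1 = v2 = 1.
```

Raising GAMMA does not change the minimizer here, consistent with the remark that for sufficiently large γ the elastic solution stabilizes; if the original constraints were consistent, a large enough γ would drive v and w to zero and recover the solution of (NP).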
An attractive feature is that only linear terms are added to (NP), giving no increase in the expected degrees of freedom at each QP solution.

1.2. The SQP Approach. An SQP method was first suggested by Wilson [102] for the special case of convex optimization. The approach was popularized mainly by Biggs [7], Han [66], and Powell [85, 87] for general nonlinear constraints. Further history of SQP methods and extensive bibliographies are given in [61, 39, 73, 78, 28]. For a survey of recent results, see Gould and Toint [65].

Several general-purpose SQP solvers have proved reliable and efficient during the last 20 years. For example, under mild conditions the solvers NLPQL [92], NPSOL [57, 60], and DONLP [95] typically find a (local) optimum from an arbitrary starting point, and they require relatively few evaluations of the problem functions and gradients compared to traditional solvers such as MINOS [75, 76, 77] and CONOPT [34, 2].

SQP methods have been particularly successful in solving the optimization problems arising in optimal trajectory calculations. For many years, the optimal trajectory system OTIS (Hargraves and Paris [67]) has been...