“…Although they generally need more iterations to achieve convergence than the Newton method, their greater numerical efficiency means that they are usually faster overall, and they tend to be more robust to the conditioning of the model and data. The OPTMUM procedure contains three such algorithms: the BFGS method due to Broyden (1967), Fletcher (1970), Goldfarb (1970), and Shanno (1970); the DFP method of Davidon (1968) and Fletcher and Powell (1963); and BFGS-SC, a modified BFGS algorithm whose formula for updating the Hessian estimate has been changed to make it scale free. In all three cases, the OPTMUM implementation uses the Cholesky factorization of the approximation to the Hessian in (3.3), i.e., H = C′C, before solving for d. The BFGS algorithm is the default choice in OPTMUM, while the other five are available as options.…”
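
As a rough illustration (not the OPTMUM code itself), the NumPy sketch below applies the standard BFGS update to a Hessian approximation H and obtains the search direction d from the Cholesky factor of H. The function names (`bfgs_update`, `search_direction`, `armijo_step`), the backtracking line search, and the quadratic test problem are all assumptions added for the example; note also that `numpy.linalg.cholesky` returns a lower-triangular factor, i.e., the transpose of the upper-triangular C in H = C′C above.

```python
import numpy as np

def bfgs_update(H, s, y):
    """BFGS update of the Hessian approximation H,
    given the step s = x_new - x_old and gradient change y = g_new - g_old."""
    Hs = H @ s
    return H - np.outer(Hs, Hs) / (s @ Hs) + np.outer(y, y) / (y @ s)

def search_direction(H, g):
    """Solve H d = -g using the Cholesky factorization of H (assumed positive definite)."""
    C = np.linalg.cholesky(H)        # lower-triangular factor: H = C C'
    z = np.linalg.solve(C, -g)       # solve C z = -g
    return np.linalg.solve(C.T, z)   # solve C' d = z

def armijo_step(f, x, g, d, t=1.0, beta=0.5, c=1e-4):
    """Simple backtracking line search satisfying the Armijo condition."""
    while f(x + t * d) > f(x) + c * t * (g @ d):
        t *= beta
    return t

# Illustrative quadratic test problem: f(x) = 0.5 x'Ax - b'x, gradient Ax - b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b

x = np.zeros(2)
H = np.eye(2)                        # initial Hessian approximation
for _ in range(50):
    g = grad(x)
    if np.linalg.norm(g) < 1e-8:
        break
    d = search_direction(H, g)
    t = armijo_step(f, x, g, d)
    x_new = x + t * d
    s, y = x_new - x, grad(x_new) - g
    if y @ s > 1e-12:                # skip update if the curvature condition fails
        H = bfgs_update(H, s, y)
    x = x_new

print(x, np.linalg.solve(A, b))      # quasi-Newton result vs. exact minimizer
```

The sketch mirrors the structure described in the passage: each iteration solves for d through the factored Hessian approximation rather than forming an explicit inverse, and the BFGS formula refreshes that approximation from gradient differences alone, which is why no second derivatives are ever computed.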