1968
DOI: 10.1093/comjnl/10.4.406

Variance algorithm for minimization

Cited by 487 publications (83 citation statements); references 4 publications.
“…In the output file (see Appendix B) the result array is saved as an ASCII vector; 2. The basic idea of the Davidon Variance Algorithm (Davidon 1968) is to calculate the covariance matrix by an iterative algorithm. The matrix is obtained by using only the function values and the gradient, and the minimum value is calculated simultaneously as the algorithm converges.…”
Section: Minimizing the Negative Log-likelihood Function (mentioning)
confidence: 99%
“…Although they generally need more iterations to achieve convergence than the Newton method, their numerical efficiency means that they are usually faster and, furthermore, tend to be more robust to the condition of the model and data. The OPTMUM procedure contains three such algorithms: the BFGS method due to Broyden (1967), Fletcher (1970), Goldfarb (1970) and Shanno (1970), the DFP method of Davidon (1968) and Fletcher and Powell (1963), and BFGS-SC, which is a modified BFGS algorithm in which the formula for the computation of the update of the Hessian estimate has been changed to make it scale free. In all three cases, the OPTMUM implementation of the algorithm uses the Cholesky factorization of the approximation to the Hessian in (3.3), i.e., H = C′C, before solution for d. The BFGS algorithm is the default choice in OPTMUM, while the other five are available as options.…”
Section: Nonlinear Optimization (mentioning)
confidence: 99%
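The quoted passage describes two pieces: a quasi-Newton update of a Hessian approximation and a Cholesky factorization H = C′C used before solving for the direction d. A minimal sketch of both, in Python rather than GAUSS, is given below; it is not OPTMUM's implementation, and the scale-free BFGS-SC variant is not reproduced.

import numpy as np

def bfgs_update(H, s, y):
    """Standard BFGS update of the Hessian approximation H from step s and
    gradient change y; this is the general family the passage describes,
    without OPTMUM's specific scaling modifications."""
    Hs = H @ s
    return H - np.outer(Hs, Hs) / (s @ Hs) + np.outer(y, y) / (y @ s)

def search_direction(H, g):
    """Solve H d = -g through the Cholesky factorization H = C'C rather than
    inverting H directly.  numpy returns the lower factor L, so C = L'."""
    L = np.linalg.cholesky(H)       # raises LinAlgError if H is not positive definite
    z = np.linalg.solve(L, -g)      # first triangular system: L z = -g
    return np.linalg.solve(L.T, z)  # second triangular system: L' d = z

Factoring once and solving two triangular systems is cheaper and numerically safer than forming the inverse of the Hessian approximation, which is presumably why the quoted implementation takes this route.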
“…O{h_i} (11). If I_j indicates the set of points at which violation of the jth type of constraint occurs, then the O operator solves a “local” optimization problem at the highest point, h_n, in the set I_j. The optimization problem is to minimize…”
Section: {y_i} (mentioning)
confidence: 99%
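The fragment above cuts off before the local problem is stated, so only the setup can be sketched. In the Python sketch below, the predicate for "violates the jth type of constraint" and the reading of "highest point" as the largest h value are assumptions, since the quote does not define either.

def highest_violation_index(h, violates):
    """Sketch of the setup in the quote: I_j collects the points where the
    j-th type of constraint is violated, and the O operator then solves a
    'local' optimization problem at the highest point h_n in I_j.
    Both 'violates' and the meaning of 'highest' are assumed here."""
    I_j = [i for i, h_i in enumerate(h) if violates(i, h_i)]
    return max(I_j, key=lambda i: h[i], default=None)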
“…{h_i}O = {T_i + c_min} (12). Greaves works with zero minimum clearance, but it is trivial to extend the procedure to non-zero values. The clearance constraint that the final set of points must satisfy is y_i ≥ T_i + c_min (13) and is maintained during each iteration step by the “non-lowering” requirement:…”
Section: {y_i} (mentioning)
confidence: 99%
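The clearance constraint and the "non-lowering" requirement quoted above can be sketched directly. The sketch assumes y_i is the current value at point i, T_i the reference it must clear, and c_min the minimum clearance; enforcing the constraint only ever raises points, which is what keeps the non-lowering requirement satisfied between iterations. Names and the list representation are illustrative.

def enforce_clearance(y, T, c_min=0.0):
    """Raise any point that violates y_i >= T_i + c_min (eq. 13 in the quote).
    Points are only ever moved up, never down, so the 'non-lowering'
    requirement holds automatically from one iteration to the next."""
    return [max(y_i, T_i + c_min) for y_i, T_i in zip(y, T)]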