1994
DOI: 10.1007/bf03167225

A convergent secant method for constrained optimization

Abstract: In this paper we combine a secant method with a trust region strategy so that the resulting algorithm not only has a local two-step superlinear rate but also converges globally to Karush-Kuhn-Tucker points. The condition required to prove these convergence properties is weaker than that of some trust region methods which use the reduced Hessian as a tool. A minor revision of this algorithm is shown to possess a one-step superlinear rate.
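The abstract describes combining a secant (quasi-Newton) update with a trust region globalization for constrained optimization. The paper's actual algorithm is not reproduced here; the following is a minimal illustrative sketch, assuming a standard trust-region SQP loop with a damped BFGS secant update of the Lagrangian Hessian and an l1 merit function on a toy equality-constrained problem. The test problem, step-scaling rule, and all constants are assumptions made for the example, not details from the paper.

```python
# Minimal sketch (NOT the paper's algorithm): trust-region SQP with a
# damped BFGS secant approximation of the Lagrangian Hessian.
import numpy as np

# Toy problem (assumption): minimize f(x) = (x0 - 2)^2 + (x1 - 1)^2
#                           subject to c(x) = x0 + x1 - 1 = 0
def f(x):      return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
def grad_f(x): return np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] - 1.0)])
def c(x):      return np.array([x[0] + x[1] - 1.0])
def jac_c(x):  return np.array([[1.0, 1.0]])

MU = 10.0  # l1 penalty weight for the merit function (assumption)

def merit(x):
    # l1 penalty merit function used to accept or reject trial steps
    return f(x) + MU * np.sum(np.abs(c(x)))

def sqp_step(x, B):
    """Solve the equality-constrained QP subproblem via its KKT system:
       [B  A^T] [d  ]   [-grad_f]
       [A   0 ] [lam] = [-c     ]"""
    A = jac_c(x)
    m = A.shape[0]
    K = np.block([[B, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([-grad_f(x), -c(x)])
    sol = np.linalg.solve(K, rhs)
    return sol[:2], sol[2:]

x = np.array([0.0, 0.0])
lam = np.zeros(1)
B = np.eye(2)     # secant approximation to the Lagrangian Hessian
radius = 1.0      # trust-region radius

for k in range(30):
    d, lam_new = sqp_step(x, B)

    # Crude trust-region handling (assumption): shrink the full SQP step
    # if it leaves the trust region instead of re-solving the subproblem.
    if np.linalg.norm(d) > radius:
        d *= radius / np.linalg.norm(d)

    # Predicted vs. actual reduction of the merit function
    pred = merit(x) - (f(x) + grad_f(x) @ d + 0.5 * d @ B @ d
                       + MU * np.sum(np.abs(c(x) + jac_c(x) @ d)))
    ared = merit(x) - merit(x + d)
    rho = ared / pred if pred > 1e-16 else 0.0

    if rho > 0.1:
        # Accept the step; damped BFGS secant update using Lagrangian gradients
        g_old = grad_f(x) + jac_c(x).T @ lam_new
        x_new = x + d
        g_new = grad_f(x_new) + jac_c(x_new).T @ lam_new
        s, y = d, g_new - g_old
        sBs = s @ B @ s
        theta = 1.0 if s @ y >= 0.2 * sBs else 0.8 * sBs / (sBs - s @ y)
        r = theta * y + (1.0 - theta) * (B @ s)   # Powell damping
        B = B - np.outer(B @ s, B @ s) / sBs + np.outer(r, r) / (s @ r)
        x, lam = x_new, lam_new
        radius = min(2.0 * radius, 10.0)
    else:
        radius *= 0.5   # reject the step and shrink the trust region

    # Stop when the KKT residual is small
    kkt = np.linalg.norm(grad_f(x) + jac_c(x).T @ lam) + np.linalg.norm(c(x))
    if kkt < 1e-8:
        break

print("solution:", x, "multiplier:", lam, "iterations:", k + 1)
```

For this toy problem the iterates approach the KKT point x = (1, 0) with multiplier 2; the secant update only sees gradient differences of the Lagrangian, which is the sense in which such methods avoid forming exact second derivatives.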

Cited by 3 publications
References 12 publications