“…This approach was promptly followed by many authors, mainly in conjunction with SLP (sequential linear programming), SQP (sequential quadratic programming), and interior-point-type methods (see, for instance, [1,5,6,7,9,11,12,15,16,17,22,23,24,25]). …”
Section: The Filter Methods
“…In this case, $s_c$ is only accepted if $A_{\mathrm{red}}^{\mathrm{opt}}/P_{\mathrm{red}}^{\mathrm{opt}} > \gamma_g$ is satisfied. Most filter algorithms, such as those presented in [5,7,9,16,17,22,23], include similar tests.…”
Section: Mixing Merit Function and Filter Ideas
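The test quoted above is the classical comparison of achieved versus model-predicted reduction used in trust-region methods. Below is a minimal Python sketch of such a test; the names ared_opt, pred_opt and gamma_g mirror the snippet's notation, while the default value of gamma_g and the handling of a nonpositive predicted reduction are illustrative assumptions, not details taken from the paper.

    def accept_step(ared_opt: float, pred_opt: float, gamma_g: float = 0.1) -> bool:
        """Accept the trial step s_c only if the achieved reduction is at
        least the fraction gamma_g of the model-predicted reduction."""
        if pred_opt <= 0.0:
            # The model predicts no improvement, so the ratio is meaningless;
            # reject and let the outer algorithm shrink the trust region.
            return False
        return ared_opt / pred_opt > gamma_g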
Abstract. A sequential quadratic programming algorithm for solving nonlinear programming problems is presented. The new feature of the algorithm is the definition of the merit function: instead of using one penalty parameter per iteration and increasing it as the algorithm progresses, we suggest that a new point be accepted if it stays sufficiently below the piecewise linear function defined by some previous iterates in the $(f, \|C\|_2^2)$-space. Therefore, the penalty parameter is allowed to decrease between successive iterations, and one need not decide how to update it. This approach resembles the filter method introduced by Fletcher and Leyffer [Math. Program., 91 (2001), pp. 239-269], but it is less tolerant, since a merit function is still used. Numerical comparison with standard methods shows that this strategy is promising.
Mathematical subject classification: 65K05, 90C55, 90C30, 90C26.
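To make the acceptance rule described in the abstract concrete, here is a hedged Python sketch: previous iterates $(h_i, f_i)$, with $h = \|C(x)\|_2^2$ measuring infeasibility, define a piecewise linear function over $h$, and a trial point is accepted only if it falls sufficiently below that function. The margin rule (the sigma term) and the flat extrapolation outside the stored range are illustrative assumptions; the paper's precise definition of "sufficiently below" differs in detail.

    import numpy as np

    def envelope(iterates, h):
        """Piecewise linear function through stored (h_i, f_i) pairs,
        where h_i = ||C(x_i)||_2^2 and f_i = f(x_i)."""
        pts = sorted(iterates)                 # sort by infeasibility h
        hs = [p[0] for p in pts]
        fs = [p[1] for p in pts]
        # np.interp extrapolates flat beyond the endpoints (an assumption).
        return np.interp(h, hs, fs)

    def accept(iterates, h_new, f_new, sigma=1e-4):
        """Accept the trial pair if it stays sufficiently below the
        piecewise linear function defined by previous iterates."""
        return f_new <= envelope(iterates, h_new) - sigma * (1.0 + abs(f_new))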
“…
• Interior point optimizer (IPOPT), see Wächter and Biegler (2006)
• Filter-SQP algorithm (Fletcher, Leyffer, and Toint, 2002)
• L-BFGS-B, Algorithm 778 (Zhu, Byrd, Lu, and Nocedal, 1997)
• NLOPT (Johnson, 2010)
• SCIP (Achterberg, 2009)
• The standard genetic algorithm (GA) of MATLAB (Goldberg and Holland, 1988)
1000 repeated runs were completed with the randomized parameters for each algorithm. (Table 3: Comparison of various optimization methods regarding optimal power flow computation.)…”
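The quoted comparison rests on repeated runs with randomized parameters. A hypothetical Python harness for that protocol might look as follows; SOLVERS, run_solver, and the load_scale parameter are placeholders for illustration and do not correspond to the actual APIs of the cited packages.

    import random

    SOLVERS = ["IPOPT", "Filter-SQP", "L-BFGS-B", "NLOPT", "SCIP", "GA"]

    def run_solver(name, params):
        # Placeholder: dispatch to the real solver interface here and
        # return the objective value it reaches on this instance.
        return 0.0

    def benchmark(n_runs=1000, seed=0):
        rng = random.Random(seed)
        results = {name: [] for name in SOLVERS}
        for _ in range(n_runs):
            # One randomized problem instance, shared by all solvers.
            params = {"load_scale": rng.uniform(0.8, 1.2)}
            for name in SOLVERS:
                results[name].append(run_solver(name, params))
        return results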
Abstract. A common question asked by users of direct search algorithms is how to use derivative information at iterates where it is available. This paper addresses that question with respect to Generalized Pattern Search (GPS) methods for unconstrained and linearly constrained optimization. Specifically, this paper concentrates on the GPS POLL step. Polling is done to certify the need to refine the current mesh, and it requires O(n) function evaluations in the worst case. We show that the use of derivative information significantly reduces the maximum number of function evaluations necessary for POLL steps, even to a worst case of a single function evaluation with certain algorithmic choices given here. Furthermore, we show that rather rough approximations to the gradient are sufficient to reduce the POLL step to a single function evaluation. We prove that using these less expensive POLL steps does not weaken the known convergence properties of the method, all of which depend only on the POLL step.
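The abstract's key idea, that even a rough gradient approximation lets the POLL step skip most trial directions, can be sketched in Python as follows. The 2n coordinate poll set, the function names, and the opportunistic early return are illustrative choices under stated assumptions, not the paper's notation.

    import numpy as np

    def pruned_poll(f, x, mesh, g):
        """POLL around x on a mesh of size `mesh`, skipping directions d
        with g.T d >= 0, which the gradient approximation g marks as ascent."""
        n = len(x)
        directions = [s * e for e in np.eye(n) for s in (1.0, -1.0)]
        fx = f(x)
        for d in directions:
            if g @ d >= 0.0:
                continue                      # pruned: no evaluation needed
            trial = x + mesh * d
            if f(trial) < fx:
                return trial                  # improved mesh point found
        return None                           # poll failed: refine the mesh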