Recently, similar to Hager and Zhang (SIAM J Optim 16:170-192, 2005), Yu (Nonlinear self-scaling conjugate gradient methods for large-scale optimization problems. Doctoral thesis, Sun Yat-Sen University, 2007) and Yuan (Optim Lett 3:11-21, 2009) proposed modified PRP conjugate gradient methods that generate sufficient descent directions without any line searches. To obtain the global convergence of their algorithms, they need the assumption that the stepsize is bounded away from zero. In this paper, we make a slight modification to these methods so that the modified methods retain the sufficient descent property. Without requiring a positive lower bound on the stepsize, we prove that the proposed methods are globally convergent. Some numerical results are also reported.
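As a rough illustration of the sufficient descent idea mentioned above, the sketch below implements a three-term PRP-type direction d_k = -g_k + β_k d_{k-1} - θ_k y_{k-1}, which satisfies g_k^T d_k = -||g_k||^2 by construction, independent of the line search. The Armijo backtracking and all function names are illustrative assumptions, not the exact schemes of the cited papers.

```python
import numpy as np

def three_term_prp(f, grad, x0, max_iter=500, tol=1e-6):
    """Sketch of a three-term PRP-type iteration whose search direction
    satisfies g_k^T d_k = -||g_k||^2 by construction, independent of the
    line search.  The Armijo backtracking below is a placeholder."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                     # first step: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        alpha = 1.0
        for _ in range(30):                    # simple Armijo backtracking
            if f(x + alpha * d) <= f(x) + 1e-4 * alpha * g.dot(d):
                break
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        denom = g.dot(g)                       # ||g_{k-1}||^2
        beta = g_new.dot(y) / denom            # PRP parameter
        theta = g_new.dot(d) / denom           # extra term enforcing descent
        d = -g_new + beta * d - theta * y      # gives g_new^T d = -||g_new||^2
        x, g = x_new, g_new
    return x
```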
In this paper, we propose a box-constrained differentiable penalty method for nonlinear complementarity problems, which not only inherits the same convergence rate as the existing ℓ_{1/p}-penalty method but also overcomes its disadvantage of non-Lipschitzianness. We introduce the concept of a uniform ξ-P-function with ξ ∈ (1, 2], and apply it to prove that the solution of the box-constrained penalized equations converges to that of the original problem at an exponential order. Instead of solving the box-constrained penalized equations directly, we solve a corresponding differentiable least-squares problem using a trust-region Gauss-Newton method. Furthermore, we establish the connection between the local solution of the least-squares problem and that of the original problem under mild conditions. We carry out numerical experiments on test problems from MCPLIB, and show that the proposed method is efficient and robust.
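The least-squares step described in this abstract can be pictured as follows. This is only a sketch under loose assumptions: it uses SciPy's trust-region reflective least-squares solver as a stand-in for the paper's trust-region Gauss-Newton method, and a toy Fischer-Burmeister residual in place of the paper's penalized equations, so neither the residual nor the solver is the authors' exact construction.

```python
import numpy as np
from scipy.optimize import least_squares

def solve_box_penalized_ls(G, x0, lower, upper):
    """Minimal sketch, not the paper's algorithm: given a smooth residual
    map G(x) for the penalized equations, solve the box-constrained
    least-squares problem  min ||G(x)||^2  s.t.  lower <= x <= upper
    with a trust-region least-squares solver."""
    sol = least_squares(G, x0, bounds=(lower, upper), method='trf')
    return sol.x

# Illustrative use on a toy NCP with F(x) = x - 1 (solution x = 1), using
# the smooth Fischer-Burmeister residual as a stand-in for the paper's
# penalized system.
def fb_residual(x):
    Fx = x - 1.0
    return np.sqrt(x**2 + Fx**2) - x - Fx

x = solve_box_penalized_ls(fb_residual, x0=np.array([5.0]),
                           lower=np.array([0.0]), upper=np.array([10.0]))
```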
In this paper, we study inequality constrained nonlinear programming problems by virtue of an ℓ_{1/2}-penalty function and a quadratic relaxation. Combining these with an interior-point method, we propose an interior-point ℓ_{1/2}-penalty method. We introduce different kinds of constraint qualifications to establish the first-order necessary conditions for the quadratically relaxed problem. We apply the modified Newton method to a sequence of logarithmic barrier problems, and design some reliable algorithms. Moreover, we establish global convergence results for the proposed method. We carry out numerical experiments on 266 inequality constrained optimization problems. Our numerical results show that the proposed method is competitive with some existing interior-point ℓ_1-penalty methods in terms of iteration numbers, and better in terms of the values of the penalty parameter.
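To make the logarithmic barrier subproblem concrete, here is a heavily simplified sketch under stated assumptions: the constraints are taken to be affine (Ax ≤ b), plain damped Newton is applied to the barrier function, and the paper's ℓ_{1/2}-penalty and quadratic relaxation are not reproduced; all names and parameters are illustrative.

```python
import numpy as np

def barrier_newton(f_grad, f_hess, A, b, x0,
                   mu0=1.0, shrink=0.1, outer=8, inner=30, tol=1e-8):
    """Minimal sketch of the log-barrier ingredient only: for affine
    constraints A x <= b, apply damped Newton to
        phi_mu(x) = f(x) - mu * sum_i log(b_i - a_i^T x)
    for a decreasing sequence of barrier parameters mu."""
    x = np.asarray(x0, dtype=float)
    mu = mu0
    for _ in range(outer):
        for _ in range(inner):
            s = b - A @ x                                # slacks, must stay positive
            g = f_grad(x) + mu * A.T @ (1.0 / s)         # gradient of phi_mu
            H = f_hess(x) + mu * A.T @ np.diag(1.0 / s**2) @ A
            step = np.linalg.solve(H, -g)                # Newton step
            if np.linalg.norm(step) <= tol:
                break
            t = 1.0
            while np.any(b - A @ (x + t * step) <= 0):   # damping keeps the
                t *= 0.5                                 # iterate strictly feasible
            x = x + t * step
        mu *= shrink                                     # tighten the barrier
    return x
```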