In this paper, we give a smoothing approximation to the nondifferentiable exact penalty function for nonlinear constrained optimization problems. Error estimates are obtained among the optimal objective function values of the smoothed penalty problems, of the nonsmooth penalty problem, and of the original problem. An algorithm based on our smoothing function is given, which is shown to be globally convergent under some mild conditions.
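The abstract does not reproduce the paper's particular smoothing function, but a minimal sketch of the general idea can be given with one common choice: the l1 exact penalty uses the non-differentiable kernel max(0, t), which can be smoothed as p_eps(t) = (t + sqrt(t^2 + eps^2))/2. The function names and the penalty form below are illustrative assumptions, not the paper's construction.

```python
import math

def smoothed_plus(t, eps):
    """Smooth approximation of the non-differentiable kernel max(0, t):
    p_eps(t) = (t + sqrt(t^2 + eps^2)) / 2.
    For every t, 0 <= p_eps(t) - max(0, t) <= eps / 2, which is the kind
    of error estimate relating smoothed and exact penalty values."""
    return 0.5 * (t + math.sqrt(t * t + eps * eps))

def smoothed_penalty(f, constraints, x, rho, eps):
    """Smoothed version of the l1 exact penalty
    f(x) + rho * sum_i max(0, g_i(x))  for constraints g_i(x) <= 0.
    (Illustrative helper; the paper's own smoothing may differ.)"""
    return f(x) + rho * sum(smoothed_plus(g(x), eps) for g in constraints)
```

Because the pointwise gap is at most eps/2 per constraint, the optimal values of the smoothed and exact penalty problems differ by at most rho * m * eps / 2 for m constraints, which is the flavor of error estimate the abstract refers to.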
In this paper, a modified simple penalty function is proposed for a constrained nonlinear programming problem by augmenting the dimension of the program with a variable that controls the weight of the penalty terms. This penalty function enjoys improved smoothness. Under mild conditions, it can be proved to be exact in the sense that local minimizers of the original constrained problem are precisely the local minimizers of the associated penalty problem. MSC: 47H20; 35K55; 90C30
For two kinds of nonlinear constrained optimization problems, we propose two simple penalty functions, respectively, by augmenting the dimension of the primal problem with a variable that controls the weight of the penalty terms. Both of the penalty functions enjoy improved smoothness. Under mild conditions, it can be proved that our penalty functions are both exact in the sense that local minimizers of the associated penalty problem are precisely the local minimizers of the original constrained problem.
In this paper, an approximate smoothing approach to the non-differentiable exact penalty function is proposed for the constrained optimization problem. A simple smoothed penalty algorithm is given, and its convergence is discussed. A practical algorithm to compute an approximate optimal solution is given, together with computational experiments that demonstrate its efficiency.
We consider a smooth penalty algorithm for solving nonconvex optimization problems based on a family of smooth functions that approximate the usual exact penalty function. At each iteration of the algorithm we only need to find a stationary point of the smooth penalty function, so the difficulty of computing a global solution can be avoided. Under a generalized Mangasarian-Fromovitz constraint qualification (GMFCQ) that is weaker and more comprehensive than the traditional MFCQ, we prove that the sequence generated by this algorithm enters the feasible set of the primal problem after finitely many iterations, and that any accumulation point of the sequence of iterates must be a Karush-Kuhn-Tucker (KKT) point. Furthermore, we obtain stronger convergence results for convex optimization problems.
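The iterative scheme described above, where each outer step only seeks a stationary point of the smooth penalty before tightening the penalty parameters, can be sketched as follows for a one-dimensional problem with a single constraint g(x) <= 0. The smoothing p_eps(t) = (t + sqrt(t^2 + eps^2))/2 of max(0, t), the parameter schedule, and the fixed-step inner solver are all illustrative assumptions, not the paper's algorithm.

```python
import math

def smooth_penalty_method(f_grad, g, g_grad, x0,
                          rho=2.0, eps=1.0, sigma=10.0,
                          outer=4, inner=50000, tol=1e-8):
    """Sketch of a smooth penalty scheme for min f(x) s.t. g(x) <= 0
    (scalar x, one constraint). Each outer iteration only drives the
    gradient of the smooth penalty toward zero (a stationary point),
    then rho is increased and the smoothing parameter eps is decreased."""
    x = x0
    for _ in range(outer):
        def grad_P(x, rho=rho, eps=eps):
            # Gradient of P(x) = f(x) + rho * p_eps(g(x)),
            # where p_eps(t) = (t + sqrt(t^2 + eps^2)) / 2.
            s = math.sqrt(g(x) ** 2 + eps ** 2)
            return f_grad(x) + rho * g_grad(x) * 0.5 * (1.0 + g(x) / s)
        # Fixed-step gradient descent with step 1/L, where L bounds the
        # curvature: p_eps'' <= 1/(2*eps), and we assume |f''| <= 2 as in
        # the usage example below (an assumption of this sketch).
        lr = 1.0 / (2.0 + rho / (2.0 * eps))
        for _ in range(inner):
            d = grad_P(x)
            if abs(d) < tol:
                break
            x -= lr * d
        rho *= sigma
        eps /= sigma
    return x

# Usage: minimize f(x) = x^2 subject to g(x) = 1 - x <= 0 (solution x = 1).
x_star = smooth_penalty_method(lambda x: 2.0 * x,     # f'(x)
                               lambda x: 1.0 - x,     # g(x)
                               lambda x: -1.0,        # g'(x)
                               x0=0.0)
```

Note that the inner loop never attempts global minimization: it stops as soon as the penalty gradient is small, matching the abstract's point that only stationary points of the smooth penalty are required at each step.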