2005
DOI: 10.1155/mpe.2005.165
A comparative study on optimization methods for the constrained nonlinear programming problems

Abstract: Constrained nonlinear programming problems often arise in many engineering applications. The most well-known optimization methods for solving these problems are sequential quadratic programming (SQP) methods and generalized reduced gradient (GRG) methods. This study compares the performance of these methods with that of genetic algorithms, which have gained popularity in recent years owing to their speed and robustness. The comparative study is performed on fifteen test problems selected from the literature.
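As a concrete illustration of the kind of problem the study addresses, the sketch below solves a small constrained nonlinear program with an SQP-type method (SciPy's SLSQP routine). The objective, constraints, and starting point are illustrative assumptions, not one of the paper's fifteen test problems.

# A minimal sketch: solving a small constrained nonlinear program with
# an SQP-type method via SciPy's SLSQP. The problem data below are
# illustrative assumptions, not taken from the paper.
import numpy as np
from scipy.optimize import minimize

def objective(x):
    # f(x) = (x0 - 1)^2 + (x1 - 2.5)^2
    return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

constraints = [
    # SLSQP expects inequality constraints in the form g(x) >= 0
    {"type": "ineq", "fun": lambda x: x[0] - 2.0 * x[1] + 2.0},
    {"type": "ineq", "fun": lambda x: -x[0] - 2.0 * x[1] + 6.0},
]
bounds = [(0.0, None), (0.0, None)]  # x0 >= 0, x1 >= 0

result = minimize(objective, x0=np.array([2.0, 0.0]), method="SLSQP",
                  bounds=bounds, constraints=constraints)
print(result.x, result.fun)  # approximate constrained minimizer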

Cited by 54 publications (24 citation statements), 2007–2020
References 18 publications
“…Thus, if a solution set violates any of the implicit bound constraints, a penalty is applied to its fitness value, thus degrading the quality of an infeasible solution. Penalty function methods are the most popular methods used for constrained optimization problems using a GA [14]. These methods transform a constrained problem into an unconstrained problem by imposing a penalty on the infeasible solution.…”
Section: Penalty Functions (mentioning; confidence: 99%)
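The statement above describes the standard static-penalty construction. A minimal sketch, assuming a quadratic penalty with a fixed coefficient r (both assumptions, not details from the cited paper):

# Static penalty function as described above: the constrained problem
# min f(x) s.t. g_i(x) <= 0 becomes an unconstrained fitness in which
# infeasible points are degraded. f, g_list, and r are assumptions.
def penalized_fitness(x, f, g_list, r=1e3):
    violation = sum(max(0.0, g(x)) ** 2 for g in g_list)  # 0 if feasible
    return f(x) + r * violation

# Example: minimize x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0
f = lambda x: x ** 2
g_list = [lambda x: 1.0 - x]
print(penalized_fitness(0.5, f, g_list))  # infeasible point: heavily penalized
print(penalized_fitness(1.2, f, g_list))  # feasible point: just f(x)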
“…The related algorithms are classified into two classes: deterministic and stochastic [1]. Deterministic algorithms are gradient-based local search methods, which need gradient information to find candidate solutions [2].…”
Section: Introduction (mentioning; confidence: 99%)
“…The GRG algorithm transforms inequality constraints into equality constraints by introducing slack variables. It is a very reliable and robust algorithm [15][16][17][18][19][20].…”
Section: Introduction (mentioning; confidence: 99%)
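The slack-variable device mentioned above is a standard construction, sketched here from general knowledge rather than from the paper: each inequality constraint

    g_i(x) ≤ 0

is replaced by an equality constraint with a new nonnegative variable s_i,

    g_i(x) + s_i = 0,    s_i ≥ 0,

so that the GRG machinery, which operates on equality-constrained problems, can be applied directly.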