1998
DOI: 10.1109/4235.728207
Evolutionary algorithms and gradient search: similarities and differences

Abstract: Classical gradient methods and evolutionary algorithms represent two very different classes of optimization techniques that seem to have very different properties. This paper discusses some aspects of some "obvious" differences and explores to what extent a hybrid method, the evolutionary-gradient-search procedure, can be used beneficially in the field of continuous parameter optimization. Simulation experiments show that on some test functions, the hybrid method yields faster convergence than pure evolution s…

Cited by 248 publications (94 citation statements); references 13 publications.
“…In addition to the aforementioned advantage of information exchange between solutions, there are other major differences between GA and most current optimization methods [Goldberg, 1989] which justify the attempts presented in this paper: GA work with the coding of parameters not with parameters themselves (incorporating and exploiting important parameters specific properties); GA search from an initial population of individuals not from a single individual (a set of possible acceptable solutions can be obtained simultaneously); GA only use payoff (objective function) information and not derivatives, often troublesomely calculated, or other auxiliary information (data economy); GA use probabilistic transition rules between generations (i.e., iterative steps of convergence) not deterministic ones (deterministic local directions can be avoided); and GA can deal with any form of objective function that seems to best suit the problem. A more extensive review of similarities and differences of GA and gradient techniques is given by, e.g., Salomon [1998].…”
Section: Multipopulation Genetic Algorithm (mentioning)
confidence: 99%
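The GA properties listed in the quotation above (population-based search, payoff-only information, probabilistic transition rules) can be sketched in a few lines. The following is a minimal illustration, not code from any of the cited works; the sphere test function and all parameter values are assumptions chosen for the example.

```python
import random

def genetic_algorithm(fitness, dim=2, pop_size=20, generations=100,
                      mutation_rate=0.1, seed=0):
    """Minimal real-coded GA: it uses only payoff (objective) values, no
    derivatives; it searches from a population, not a single point; and its
    transition rules (crossover, mutation) are probabilistic."""
    rng = random.Random(seed)
    # Initial population: a set of candidate solutions, not a single individual.
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection uses only objective-function ("payoff") values.
        pop.sort(key=fitness)
        parents = pop[:pop_size // 2]          # keep the better half (elitism)
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            # Uniform crossover: a probabilistic transition rule.
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            # Gaussian mutation with small per-gene probability.
            child = [g + rng.gauss(0, 0.5) if rng.random() < mutation_rate
                     else g for g in child]
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

# Usage: minimize the sphere function f(x) = sum(x_i^2).
best = genetic_algorithm(lambda x: sum(g * g for g in x))
print(best)
```

Note that at no point does the procedure evaluate a derivative: selection pressure alone drives the population toward better payoff values.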
“…'stochastic ranking' and presented a fresh view on penalty function methods in terms of dominance of penalty and objective functions. Salomon (1998) presented a new hybrid approach, the evolutionary-gradient-search method for the problems pertaining to numerical optimization. A much broader collection of developments in evolutionary computation for manufacturing optimization can be found in Dimopoulos and Zalzala (2000).…”
Section: Binary String (mentioning)
confidence: 99%
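The evolutionary-gradient-search method mentioned in the statement above combines the two paradigms: the gradient is estimated stochastically from randomly mutated trial points (the evolutionary part) and a deterministic step is then taken along that estimate. The following is a minimal sketch of that idea only; the adaptation factor 1.8, the rejection safeguard, the trial counts, and the test function are illustrative assumptions, not details taken from Salomon (1998).

```python
import math
import random

def egs_minimize(f, x0, sigma=1.0, trials=10, iterations=200, seed=1):
    """Sketch of an evolutionary-gradient-search step: estimate the gradient
    from mutated trial points, then move deterministically along it."""
    rng = random.Random(seed)
    x = list(x0)
    fx = f(x)
    for _ in range(iterations):
        # Generate trial points by Gaussian mutation (the evolutionary part)
        # and accumulate a fitness-weighted estimate of the gradient.
        grad = [0.0] * len(x)
        for _ in range(trials):
            z = [rng.gauss(0, sigma) for _ in x]
            df = f([xi + zi for xi, zi in zip(x, z)]) - fx
            grad = [g + df * zi for g, zi in zip(grad, z)]
        norm = math.sqrt(sum(g * g for g in grad)) or 1.0
        unit = [g / norm for g in grad]
        # Test two step sizes along the estimated downhill direction and
        # keep the better one (a simple form of step-size self-adaptation).
        candidates = []
        for s in (sigma * 1.8, sigma / 1.8):
            y = [xi - s * u for xi, u in zip(x, unit)]
            candidates.append((f(y), y, s))
        fy, y, s = min(candidates)
        if fy < fx:
            x, fx, sigma = y, fy, s
        else:
            # Safeguard for this sketch: shrink the mutation strength
            # whenever neither step improves on the current point.
            sigma /= 1.8
    return x, fx

# Usage: minimize the 2-D sphere function from a distant start point.
x_best, f_best = egs_minimize(lambda v: sum(t * t for t in v), [4.0, -3.0])
print(x_best, f_best)
```

The design point is that, unlike the GAs discussed above, this hybrid does exploit gradient information, but only an estimate obtained from payoff values of mutated offspring, so it remains applicable when analytical derivatives are unavailable.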
“…In the present scenario, evolutionary algorithms (EAs) are not limited to the applications in artificial intelligence, and have been increasingly used in various dimensions to solve real world problems (Dimopoulos and Zalzala, 2000;Sinha et al, 2003). Although, a plethora of literature pertaining to the search strategies adopted by various EAs is available (Nissen and Propach, 1998;Rana et al, 1996;Kazarlis et al, 2001;Yao et al, 1999;Salomon, 1998;Choi and Oh, 2000;Yoon and Moon, 2002;Kim and Myung, 1997;Storn, 1999), most of them restrict themselves to solving a particular class of optimization problem and thus, fail to provide generalized strategies that can be robustly used for wide spectrum of optimization problems in science, business and engineering applications. In general, optimization problems can be classified into two groupsnumerical optimization and combinatorial optimization (Tsai et al, 2004;Gen and Cheng, 1999).…”
Section: Introduction (mentioning)
confidence: 99%