Abstract. This paper presents a comparative study of Evolutionary Algorithms (EAs) for Constraint Satisfaction Problems (CSPs). We focus on EAs where fitness is based on penalizing constraint violations and the penalties are adapted during execution. Three different EAs based on this approach are implemented. For highly connected constraint networks, the results provide further empirical support for the theoretically predicted phase transition in binary CSPs.
Introduction

Evolutionary algorithms are usually considered to be ill-suited for solving constraint satisfaction problems. Namely, the traditional search operators (mutation and recombination) are 'blind' to the constraints, that is, parents satisfying a certain constraint may very well produce an offspring that violates it. Furthermore, while EAs have a 'basic instinct' to optimize, there is no objective function in a CSP: just a set of constraints to be satisfied. Despite such general arguments, in recent years there have been reports on quite a few EAs that solve CSPs with satisfactory performance. Roughly speaking, these EAs can be divided into two categories: those based on exploiting heuristic information about the constraint network [6,14,21,22], and those using a fitness (penalty) function that is adapted during the search [2,4,5,7,9,10,17,18].

In this paper we investigate three methods from the second category: the co-evolutionary method by Paredis [17], the heuristic-based microgenetic algorithm by Dozier et al. [4], and the EA with stepwise adaptation of weights by Eiben et al. [10]. We implement three specific evolutionary algorithms based on the corresponding methods, called COE, MID, and SAW, respectively, and compare them on a test suite consisting of randomly generated binary CSPs with finite domains. The results of the experiments are used to assess empirically the relative performance of the three methods within the same category, thereby providing suggestions as to which implementation of the same general idea is the most promising.

We use randomly generated problem instances for the experiments, where the hardness of an instance is influenced by two parameters: constraint density and constraint tightness. By running experiments on 25 different combinations of these parameters we gain detailed feedback on EA behavior and can validate theoretical predictions on the location of the phase transition.
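To make the setting concrete, the sketch below illustrates the two ingredients discussed above: a random binary CSP generator parameterized by constraint density and tightness, and a penalty-style fitness function whose constraint weights can be adapted during the run. This is a minimal illustration, not the authors' implementations; the function names (random_binary_csp, penalty_fitness, adapt_weights), the parameter values, and the simple weight-update rule are assumptions made here for exposition, with the update rule reflecting only the general idea of increasing weights of persistently violated constraints rather than the exact scheme of [10].

```python
import random

def random_binary_csp(n_vars, domain_size, density, tightness, seed=0):
    """Generate a random binary CSP.

    density   -- probability that a constraint exists between a variable pair
    tightness -- fraction of value pairs forbidden by an existing constraint
    Returns a dict mapping variable pairs (i, j) to a set of forbidden value pairs.
    """
    rng = random.Random(seed)
    constraints = {}
    for i in range(n_vars):
        for j in range(i + 1, n_vars):
            if rng.random() < density:
                forbidden = {(a, b)
                             for a in range(domain_size)
                             for b in range(domain_size)
                             if rng.random() < tightness}
                constraints[(i, j)] = forbidden
    return constraints

def penalty_fitness(assignment, constraints, weights):
    """Weighted count of violated constraints (to be minimized; 0 = solution)."""
    total = 0
    for (i, j), forbidden in constraints.items():
        if (assignment[i], assignment[j]) in forbidden:
            total += weights[(i, j)]
    return total

def adapt_weights(best, constraints, weights, delta=1):
    """Increase the weight of every constraint the current best individual
    violates, so that stubbornly violated constraints come to dominate the
    fitness (illustrative adaptation step, not the exact published scheme)."""
    for (i, j), forbidden in constraints.items():
        if (best[i], best[j]) in forbidden:
            weights[(i, j)] += delta

# Minimal usage example (hypothetical parameter values)
csp = random_binary_csp(n_vars=15, domain_size=15, density=0.3, tightness=0.3)
weights = {pair: 1 for pair in csp}
candidate = [random.randrange(15) for _ in range(15)]
print(penalty_fitness(candidate, csp, weights))
```

In this view, an EA of the second category evolves complete assignments under penalty_fitness and periodically calls something like adapt_weights, so the search pressure shifts toward the constraints that the population keeps failing to satisfy.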