Abstract. For genetic algorithms, new variants of the uniform crossover operator that introduce selective pressure at the recombination stage are proposed. A self-configuration approach for genetic algorithms based on operator probabilistic rates is suggested. The usefulness of the proposed modifications is demonstrated on benchmark tests and real-world problems.

Keywords: genetic algorithms, uniform crossover, selective pressure, recombination, self-configuration, performance comparison.
Introduction

Evolutionary algorithms (EAs), the best-known representatives of which are genetic algorithms (GAs), are well-known optimization techniques based on the principles of natural evolution. Although GAs have been successful in solving many real-world optimization problems, their performance depends on the selection of the GA settings and the tuning of their parameters. GAs usually use a bit-string solution representation, but other decisions have to be made before the algorithm's execution. The design of a GA consists of choosing the variation operators (e.g. recombination and mutation) that will be used to generate new solutions from the current population and the parent selection operator (to decide which members of the population are to be used as inputs to the variation operators), as well as a survival scheme (to decide how the next generation is to be created from the current one and the outputs of the variation operators). Additionally, real-valued parameters of the chosen settings (the probability of recombination, the level of mutation, etc.) have to be tuned.

Setting choice and parameter tuning are known to be time-consuming and complicated tasks, and much research has tried to deal with this problem. Some approaches attempt to determine appropriate settings by experimenting over a set of well-defined functions or through theoretical analysis. Another family of approaches, usually described with terms such as "self-adaptation" or "self-tuning", tries to eliminate the setting process by adapting the settings during the algorithm's execution. Much research has been devoted to "self-adapted" or "self-tuned" GAs, and the authors of the corresponding papers interpret these similar ideas in very different ways, all of them aimed at reducing the role of the human expert in algorithm design.
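To make the two ingredients above concrete, the following is a minimal sketch in Python of (a) a standard uniform crossover operator and (b) one plausible form of operator-rate self-configuration, in which the probability mass of applying each recombination operator is shifted toward whichever operator last produced the best offspring. The function names, the bonus/floor parameters, and the reallocation rule are illustrative assumptions, not the scheme from the paper itself.

```python
import random

def uniform_crossover(parent1, parent2, swap_prob=0.5):
    """Standard uniform crossover: each gene is drawn independently
    from one of the two parents (with probability swap_prob from parent1)."""
    return [g1 if random.random() < swap_prob else g2
            for g1, g2 in zip(parent1, parent2)]

def reallocate_rates(rates, winner, bonus=0.05, floor=0.05):
    """Hypothetical rate self-configuration sketch: the operator whose
    offspring performed best ('winner') gains probability, taken evenly
    from the other operators, which are never pushed below 'floor'.
    The rates always sum to 1."""
    donors = [op for op in rates if op != winner and rates[op] > floor]
    if not donors:
        return rates
    # Each donor gives up an equal share, capped so none drops below the floor.
    take = min(bonus / len(donors), min(rates[op] - floor for op in donors))
    for op in donors:
        rates[op] -= take
    rates[winner] += take * len(donors)
    return rates
```

Usage: at each generation a recombination operator is sampled according to `rates`, offspring are generated (e.g. via `uniform_crossover`), and `reallocate_rates` is called with the operator that produced the fittest child; the floor keeps every operator selectable so the adaptation can recover from early misjudgments.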