The efficiency of evolutionary algorithms may be increased by multi-objectivization, which adds auxiliary objectives to the target objective. We consider selecting these objectives during a run of an evolutionary algorithm. One selection method is based on reinforcement learning. Several types of reward have previously been used in reinforcement learning for adjusting evolutionary algorithms, but no single reward is superior. At the same time, reinforcement learning itself may be enhanced by multi-objectivization. We therefore propose a method for selecting auxiliary objectives based on multi-objective reinforcement learning, in which the reward is composed of the previously used single rewards. Hence the multi-objectivization is twofold: several rewards are involved in the selection of several auxiliary objectives. We evaluate the proposed method on various benchmark problems and compare it with a conventional evolutionary algorithm and with a method based on single-objective reinforcement learning. Multi-objective reinforcement learning shows competitive behavior and is especially useful when it is not known in advance which of the single rewards is efficient.
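To make the idea concrete, the following is a minimal sketch (not the paper's actual algorithm) of an evolutionary algorithm in which a reinforcement learning agent selects, at each step, which objective guides acceptance. The benchmark (OneMax), the auxiliary objective (`leading_ones`), and the fixed-weight scalarization of the reward vector are illustrative assumptions; full multi-objective reinforcement learning would treat the reward components without collapsing them to a single weighted sum.

```python
import random

def onemax(x):
    """Target objective: number of one-bits."""
    return sum(x)

def leading_ones(x):
    """Hypothetical auxiliary objective: length of the leading run of ones."""
    n = 0
    for bit in x:
        if not bit:
            break
        n += 1
    return n

def run(n=30, iters=4000, eps=0.1, alpha=0.5, weights=(1.0, 0.5), seed=1):
    """(1+1) EA where a single-state Q-learner picks the guiding objective."""
    rng = random.Random(seed)
    objectives = [onemax, leading_ones]   # action i = optimize objective i this step
    q = [0.0] * len(objectives)           # one Q-value per action (single state)
    x = [rng.randrange(2) for _ in range(n)]
    for _ in range(iters):
        # epsilon-greedy choice of the objective used for acceptance
        if rng.random() < eps:
            a = rng.randrange(len(objectives))
        else:
            a = max(range(len(objectives)), key=q.__getitem__)
        # standard bit mutation with rate 1/n
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        if objectives[a](y) >= objectives[a](x):
            # vector reward: change in every objective caused by the step
            r = [f(y) - f(x) for f in objectives]
            x = y
        else:
            r = [0.0] * len(objectives)
        # scalarize the reward vector with fixed weights -- a simplification
        # standing in for a genuine multi-objective reward
        q[a] += alpha * (sum(w * ri for w, ri in zip(weights, r)) - q[a])
    return onemax(x), q

best, qvals = run()
print(best, qvals)
```

With these settings the agent quickly favors whichever objective yields positive scalarized reward, and the run typically reaches the OneMax optimum well within the iteration budget.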