Classical estimation of distribution algorithms (EDAs) generally use truncation selection to estimate the distribution of the good individuals while discarding the bad ones. However, various studies in evolutionary algorithms (EAs) have reported that the bad individuals can also contribute useful information for solving the problem. This paper proposes a new method that exploits the bad individuals by modeling substructures rather than entire individual structures, targeting reinforcement learning (RL) problems, whose solutions naturally factorize into sequences of state–action pairs. The method is developed within a recent graph-based EDA named probabilistic model building genetic network programming (PMBGNP), which has been applied successfully to RL problems, yielding an extended PMBGNP. The effectiveness of the proposed method is verified on an RL problem, namely robot control. Results show that the proposed method significantly speeds up the evolution compared with related work. © 2013 Institute of Electrical Engineers of Japan. Published by John Wiley & Sons, Inc.
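
To make the general idea concrete, the following is a minimal, hypothetical Python sketch, not the paper's actual PMBGNP model, of estimating per-state action probabilities from the state–action substructures of both good and bad individuals; the function name, the penalty weight `weight_bad`, and the toy data are assumptions for illustration only.

```python
from collections import defaultdict

def estimate_substructure_model(good, bad, states, actions, weight_bad=0.5, eps=1e-6):
    """Estimate P(action | state) over substructures (state-action pairs),
    crediting pairs seen in good individuals and penalizing pairs seen in bad ones.
    NOTE: a hypothetical illustration, not the PMBGNP update rule from the paper."""
    counts = defaultdict(float)
    for individual in good:                  # each individual: a sequence of (state, action) pairs
        for s, a in individual:
            counts[(s, a)] += 1.0
    for individual in bad:                   # bad individuals reduce substructure credit
        for s, a in individual:
            counts[(s, a)] -= weight_bad
    model = {}
    for s in states:
        scores = [max(counts[(s, a)], 0.0) + eps for a in actions]  # clip and smooth
        total = sum(scores)
        model[s] = {a: v / total for a, v in zip(actions, scores)}  # normalize per state
    return model

# Toy usage: good/bad are lists of state-action sequences sampled from a population.
good = [[("s0", "left"), ("s1", "forward")], [("s0", "left"), ("s1", "left")]]
bad = [[("s0", "right"), ("s1", "right")]]
model = estimate_substructure_model(good, bad,
                                    states=["s0", "s1"],
                                    actions=["left", "right", "forward"])
print(model["s0"])  # "left" gets higher probability, since it appears only in good individuals
```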