Train operation strategy optimization is a multi-objective optimization problem constrained by many operating conditions and parameters, which makes it difficult to solve with general-purpose optimization methods. In this paper, a parallel structure with a double-population strategy is used to improve on a single general optimization algorithm: one population evolves by a genetic algorithm (GA), and the other evolves by particle swarm optimization (PSO). To make the two populations complement each other, an immigrant strategy is proposed so that the overall advantages of the parallel structure can be fully exploited. In addition, GA and PSO are each improved. For GA, convergence speed is increased by adaptively adjusting the selection pressure according to the current iteration number; an elite retention strategy (ERS) is introduced so that the best individual of each generation is preserved and carried into the next iteration; and opposition-based learning (OBL) generates an opposition population to maintain population diversity and reduce the risk of premature convergence to local optima. For PSO, a linearly decreasing inertia weight (LDIW) is adopted to better balance global exploration and local exploitation. Both MATLAB simulation results and hardware-in-the-loop (HIL) simulation results show that the proposed double-population genetic particle swarm optimization (DP-GAPSO) algorithm solves the train operation strategy optimization problem quickly and effectively.
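
To make the structure of such a double-population loop concrete, the following sketch shows one possible realization: one sub-population evolves by GA with elite retention and opposition-based learning, the other by PSO with a linearly decreasing inertia weight, and the two exchange their best individuals each generation (the immigrant strategy). The toy objective, parameter values, and the simplified rank-biased selection are illustrative assumptions only, not the paper's actual implementation or objective function.

    import numpy as np

    rng = np.random.default_rng(0)

    def fitness(x):
        # Toy objective (sphere function); the paper instead evaluates train operation cost.
        return np.sum(x**2, axis=-1)

    DIM, POP, ITERS = 5, 30, 100
    LB, UB = -5.0, 5.0
    W_MAX, W_MIN = 0.9, 0.4                       # common LDIW bounds (assumed values)

    ga_pop = rng.uniform(LB, UB, (POP, DIM))      # sub-population evolved by GA
    ps_pop = rng.uniform(LB, UB, (POP, DIM))      # sub-population evolved by PSO
    vel = np.zeros((POP, DIM))
    pbest = ps_pop.copy()
    gbest = ps_pop[np.argmin(fitness(ps_pop))].copy()

    for t in range(ITERS):
        # --- GA sub-population: rank-biased selection, arithmetic crossover, mutation ---
        f = fitness(ga_pop)
        elite = ga_pop[np.argmin(f)].copy()       # elite retention (ERS)
        ranked = ga_pop[np.argsort(f)]            # best individuals first
        idx = np.minimum(rng.integers(POP, size=POP), rng.integers(POP, size=POP))
        parents = ranked[idx]                     # bias toward better ranks (selection pressure, simplified)
        mates = parents[rng.permutation(POP)]
        alpha = rng.uniform(size=(POP, 1))
        children = alpha * parents + (1 - alpha) * mates
        children += rng.normal(0.0, 0.1, children.shape) * (rng.random(children.shape) < 0.1)
        # Opposition-based learning (OBL): keep whichever of a child and its opposite is fitter
        opposite = LB + UB - children
        children = np.where((fitness(opposite) < fitness(children))[:, None], opposite, children)
        children[np.argmax(fitness(children))] = elite   # reinsert the elite individual
        ga_pop = np.clip(children, LB, UB)

        # --- PSO sub-population: velocity update with linearly decreasing inertia weight ---
        w = W_MAX - (W_MAX - W_MIN) * t / ITERS   # LDIW
        r1, r2 = rng.random((POP, DIM)), rng.random((POP, DIM))
        vel = w * vel + 2.0 * r1 * (pbest - ps_pop) + 2.0 * r2 * (gbest - ps_pop)
        ps_pop = np.clip(ps_pop + vel, LB, UB)
        improved = fitness(ps_pop) < fitness(pbest)
        pbest[improved] = ps_pop[improved]
        gbest = pbest[np.argmin(fitness(pbest))].copy()

        # --- Immigrant strategy: exchange the best individuals between the two sub-populations ---
        ga_pop[np.argmax(fitness(ga_pop))] = gbest
        ps_pop[np.argmax(fitness(ps_pop))] = ga_pop[np.argmin(fitness(ga_pop))]

    print("best solution:", gbest, "fitness:", float(fitness(gbest)))

In this sketch the migration step simply overwrites each sub-population's worst member with the other sub-population's current best, which is one common way to let the GA and PSO populations complement each other; the paper's adaptive selection pressure and OBL scheduling may differ in detail.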