Multiobjective genetic algorithms (MOGAs) have proven to be powerful tools for solving multiobjective problems in the accelerator field. Nevertheless, for explorative problems with many variables and local optima, the performance of MOGAs is not always satisfactory, especially when a small population size is used because of practical limitations such as limited computing resources. To address this challenge, an enhanced MOGA, the neural network-based MOGA (NBMOGA), is proposed in this paper. In this method, the data produced by the standard MOGA are used to train a neural network. The trained neural network can rapidly produce a large pool of objective function estimates with sufficiently high accuracy. A subset of the most competitive estimates is then selected to form a population of the same size as the MOGA population, which is evaluated with the MOGA evaluator. Taking three classic multiobjective problems as examples, we demonstrate that the proposed method achieves faster convergence and a higher degree of diversity than the standard MOGA and three other optimization methods that have been applied in the accelerator field: multiobjective particle swarm optimization (MOPSO), the combination of MOPSO and MOGA, and the clustering-enhanced MOGA. The method is then applied to a time-consuming optimization problem, the dynamic aperture and Touschek lifetime optimization of the High Energy Photon Source. Within the same optimization time, a better set of solutions in the objective space is obtained with the NBMOGA than with the other methods. The Touschek lifetime can be improved by about 10% compared with the standard MOGA, with approximately the same dynamic aperture area. In addition, a higher degree of diversity among solutions is observed with the NBMOGA than with the other tested methods.
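The surrogate-assisted selection step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the "expensive evaluator" is a toy two-objective function, the neural network is a tiny one-hidden-layer numpy network trained by gradient descent, and the ranking of estimates by a simple objective sum is an assumed placeholder for the paper's competitiveness criterion.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_objectives(x):
    # Toy two-objective test problem (hypothetical stand-in for the
    # expensive MOGA evaluator, e.g. particle tracking). Minimization.
    f1 = np.sum(x**2, axis=-1)
    f2 = np.sum((x - 1.0) ** 2, axis=-1)
    return np.stack([f1, f2], axis=-1)

class TinyNet:
    """One-hidden-layer network trained with plain gradient descent;
    a simplified stand-in for the surrogate neural network."""

    def __init__(self, n_in, n_hidden, n_out):
        self.W1 = rng.normal(0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.5, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)

    def _forward(self, X):
        H = np.tanh(X @ self.W1 + self.b1)
        return H, H @ self.W2 + self.b2

    def fit(self, X, Y, lr=0.05, epochs=500):
        for _ in range(epochs):
            H, P = self._forward(X)
            G = 2.0 * (P - Y) / len(X)          # dMSE/dP
            GH = (G @ self.W2.T) * (1 - H**2)   # backprop through tanh
            self.W2 -= lr * H.T @ G
            self.b2 -= lr * G.sum(0)
            self.W1 -= lr * X.T @ GH
            self.b1 -= lr * GH.sum(0)

    def predict(self, X):
        return self._forward(X)[1]

def nondominated(F):
    """Boolean mask of non-dominated rows of F (minimization)."""
    mask = np.ones(len(F), bool)
    for i in range(len(F)):
        dominates_i = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominates_i.any():
            mask[i] = False
    return mask

# 1. Archive of individuals already evaluated by the standard MOGA
#    (here: random samples, for illustration only).
X_train = rng.uniform(-2, 3, (200, 4))
Y_train = true_objectives(X_train)

# 2. Train the surrogate on the archived exact evaluations.
net = TinyNet(n_in=4, n_hidden=16, n_out=2)
net.fit(X_train, Y_train)

# 3. Cheaply screen a large candidate pool with the surrogate.
pool = rng.uniform(-2, 3, (5000, 4))
pred = net.predict(pool)
score = pred.sum(axis=1)               # crude scalar ranking (assumption)
elite = pool[np.argsort(score)[:40]]   # population-sized subset

# 4. Only the selected subset goes to the expensive evaluator.
F_elite = true_objectives(elite)
print("non-dominated solutions in subset:", nondominated(F_elite).sum())
```

The key point is that the neural network is evaluated thousands of times at negligible cost, while the expensive evaluator only sees the small population-sized subset, which is where the convergence speedup comes from.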