Evolutionary computation, e.g., particle swarm optimization (PSO), has made impressive achievements in solving complex problems in science and industry. Yet, in what has remained an important open problem for more than 50 years, it still offers no theoretical guarantee of reaching the global optimum and no guarantee of general reliability, owing to the lack of a unified representation of diverse problem structures and of a generic mechanism for avoiding local optima. These long-standing pitfalls severely impair its trusted application to a wide variety of problems. Here, we report a new machine-learning-aided evolutionary computation framework, named EVOLER, which for the first time enables theoretically guaranteed global optimization of complex nonconvex problems. This is achieved by: (1) learning a low-rank representation of the problem from limited samples, which identifies an attention subspace; and (2) exploring this small attention subspace with an evolutionary computation method, which reliably avoids local optima. As validated on 20 challenging benchmarks, EVOLER finds the global optimum with probability approaching 1 and attains the best results in all cases, substantially extending its applicability to diverse problems. We use EVOLER to tackle two important real-world problems, power-grid dispatch and the inverse design of nanophotonic devices, where it consistently attains optimal results rarely achieved by state-of-the-art methods. Our method takes a crucial step toward globally guaranteed evolutionary computation, overcoming the uncertainty of data-driven black-box methods and offering broad prospects for tackling complex real-world problems.
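The two-step idea, learn a low-rank surrogate of the sampled landscape to locate an attention subspace, then run standard evolutionary search only inside it, can be illustrated with a minimal sketch. This is not EVOLER's actual algorithm: the function `evoler_sketch`, its parameters, the full-grid sampling (the real method uses far fewer, structured samples), the rank choice, and the box-shaped attention subspace are all simplifying assumptions of ours, and the PSO update is the textbook variant.

```python
import numpy as np

def evoler_sketch(f, bounds, n_grid=17, rank=2, n_particles=20, n_iters=100, seed=0):
    """Toy two-stage global optimizer for a 2-D objective f (a sketch, not EVOLER itself)."""
    rng = np.random.default_rng(seed)
    (x_lo, x_hi), (y_lo, y_hi) = bounds

    # Step 1: learn a low-rank representation of the landscape.
    # Simplification: we sample a full coarse grid; the paper's method
    # reconstructs the landscape from far fewer samples.
    xs = np.linspace(x_lo, x_hi, n_grid)
    ys = np.linspace(y_lo, y_hi, n_grid)
    Z = np.array([[f(np.array([x, y])) for y in ys] for x in xs])
    U, s, Vt = np.linalg.svd(Z)
    Z_hat = (U[:, :rank] * s[:rank]) @ Vt[:rank]      # rank-r surrogate landscape

    # Attention subspace: a small box around the surrogate minimum (our assumption).
    i, j = np.unravel_index(np.argmin(Z_hat), Z_hat.shape)
    dx, dy = (x_hi - x_lo) / n_grid, (y_hi - y_lo) / n_grid
    lo = np.array([xs[i] - dx, ys[j] - dy])
    hi = np.array([xs[i] + dx, ys[j] + dy])

    # Step 2: textbook PSO restricted to the attention subspace.
    pos = rng.uniform(lo, hi, size=(n_particles, 2))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pval = np.array([f(p) for p in pos])
    gbest = pbest[np.argmin(pval)]
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, 2))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        val = np.array([f(p) for p in pos])
        improved = val < pval
        pbest[improved], pval[improved] = pos[improved], val[improved]
        gbest = pbest[np.argmin(pval)]
    return gbest, pval.min()

# Example: the 2-D Rastrigin benchmark; its coarse-grid landscape is exactly rank 2
# (a separable sum), and the only minimum inside the attention box is the global one.
def rastrigin(v):
    return 20 + v[0]**2 - 10*np.cos(2*np.pi*v[0]) + v[1]**2 - 10*np.cos(2*np.pi*v[1])

best_x, best_f = evoler_sketch(rastrigin, [(-5.12, 5.12), (-5.12, 5.12)])
```

The point of the sketch is the division of labor: the low-rank surrogate cheaply rules out most of the multimodal landscape, so the population-based search never has to escape the distant local optima that trap plain PSO.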