In recent decades, multi-objective evolutionary algorithms (MOEAs) have largely been evaluated on artificial test problems with unrealistic characteristics, leading to uncertain conclusions about their efficacy in real-world applications. To address this issue, a few benchmark test suites comprising real-world problems have been proposed for MOEAs; however, these suites contain numerous multi-objective problems but only a handful of many-objective ones. Given the distinct challenges and inherent difficulty of many-objective optimization problems (MaOPs), it is crucial to develop a test suite of real-world problems with many conflicting objectives. Hence, in this paper, we propose a comprehensive test suite for benchmarking real-world many-objective problems. The suite consists of 11 problems drawn from different engineering disciplines. Furthermore, we comprehensively analyze these problems using eight state-of-the-art algorithms that are rooted in different fundamental principles and specifically designed for MaOPs. The experimental findings highlight the strong performance of indicator-based, weight-vector-based decomposition, Pareto-dominance-based, and hybrid MOEAs on the proposed test suite. In contrast, reference-vector-based decomposition approaches, Pareto front shape estimation-based methods, and multi-evolution approaches exhibit relatively weaker performance.