Multi-objective optimization problems often involve objectives whose computation requires expensive resampling. This is the case for many robustness metrics, frequently used as an additional objective, that account for the reliability of specific regions of the solution space. Typical robustness measurements rely on resampling, but the number of samples required for a precise dispersion measure can have a large impact on the computational cost of an algorithm. This paper proposes the integration of dominance-based statistical testing methods into the selection mechanism of MOEAs with the aim of reducing the number of fitness evaluations. The performance of the approach is tested on five classical benchmark functions by integrating it into two well-known algorithms, NSGA-II and SPEA2. The experimental results show a significant reduction in the number of fitness evaluations while maintaining the quality of the solutions.
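To illustrate the general idea described above, the following is a minimal sketch of a dominance comparison for noisy objectives in which resampling is stopped early once a statistical test reaches a decision. It is not the paper's exact procedure: the choice of Welch's t-test, the significance level, the batch size, the sampling budget, and the helper `sample_f` are illustrative assumptions.

```python
# Sketch (assumed details): incremental resampling with a per-objective
# statistical test, so that a dominance decision between two solutions can be
# made before the full resampling budget is spent.

import numpy as np
from scipy.stats import ttest_ind


def statistical_dominance(sample_f, x, y, n_objectives,
                          batch=5, max_samples=30, alpha=0.05):
    """Return 1 if x is judged to dominate y, -1 if y dominates x, and 0 if
    no statistically significant dominance relation is found within the
    sampling budget. Minimization of all objectives is assumed.

    sample_f(solution, k) is a hypothetical noisy evaluator returning a
    (k, n_objectives) array of objective values.
    """
    fx = np.empty((0, n_objectives))
    fy = np.empty((0, n_objectives))

    while fx.shape[0] < max_samples:
        # Draw a new batch of noisy evaluations for both solutions.
        fx = np.vstack([fx, sample_f(x, batch)])
        fy = np.vstack([fy, sample_f(y, batch)])

        better_x, better_y = 0, 0
        for m in range(n_objectives):
            # Welch's t-test on objective m (unequal variances assumed).
            _, p = ttest_ind(fx[:, m], fy[:, m], equal_var=False)
            if p < alpha:  # significant difference on this objective
                if fx[:, m].mean() < fy[:, m].mean():
                    better_x += 1
                else:
                    better_y += 1

        # Pareto-style decision: one solution is significantly better on at
        # least one objective and not significantly worse on any other.
        if better_x > 0 and better_y == 0:
            return 1
        if better_y > 0 and better_x == 0:
            return -1

    return 0  # budget exhausted without a significant dominance relation
```

In a selection mechanism such as the tournament of NSGA-II or the fitness assignment of SPEA2, a routine of this kind would replace the deterministic dominance check, so that pairs of solutions that can be distinguished with few samples do not consume the full resampling budget.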