Many real-world applications require the optimization of multiple conflicting criteria. For example, in robot locomotion, we are interested in maximizing speed while minimizing energy consumption. Multi-objective Bayesian optimization (MOBO) methods, such as ParEGO [6], ExI [5] and SMS-EGO [8], make use of models to define the next experiment, i.e., to select the next parameters for which the objective function is to be evaluated. However, suggesting the next experiment is typically the only use of models in MOBO. In this paper, we propose to further exploit these models to improve the estimation of the final Pareto front and ultimately provide a useful tool to the user for further analysis. We demonstrate that a small philosophical difference leads to substantial advantages in the practicality of most MOBO methods "for free".

Optimization often requires the definition of a single objective function to be optimized. However, many real-world applications naturally present multiple criteria to be optimized. For example, in complex robotic systems, we must consider performance criteria such as motion accuracy, speed, robustness to noise, or energy efficiency [9]. Typically, it is impossible to optimize all these desiderata at the same time, as they may be conflicting. However, it is still desirable to find a trade-off that satisfies, as much as possible, the different criteria and the needs of the final user.

For practical purposes, the existence of multiple criteria is often side-stepped by designing a single objective that incorporates all criteria, e.g., by defining a weighted sum of the criteria [3]. Alternatively, the optimization of these different objectives can be formalized as a Multi-Objective Optimization (MOO) problem [2]. In MOO, the goal is to return a Pareto front (PF), which represents the best possible trade-offs between the different criteria [7].
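To make the notion of a Pareto front concrete, the following is a minimal sketch (not taken from any of the cited methods) of extracting the non-dominated set from a finite collection of bi-objective evaluations. The objective values and the `dominates`/`pareto_front` helper names are hypothetical; both objectives are assumed to be maximized.

```python
def dominates(a, b):
    """True if point a Pareto-dominates point b (maximization assumed):
    a is at least as good in every objective and strictly better in one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a finite set of evaluations."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical bi-objective evaluations, e.g. (speed, -energy): both maximized.
evals = [(1.0, -3.0), (2.0, -4.0), (0.5, -2.0), (2.0, -5.0)]
front = pareto_front(evals)  # (2.0, -5.0) is dominated by (2.0, -4.0)
```

Note that minimized criteria (such as energy consumption) are handled here simply by negating them, so that a single maximization convention suffices.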
From this Pareto front, it is the responsibility of the user to ultimately select the most convenient/promising set of parameters to apply. Intuitively, the goodness of the returned PF can be measured by its accuracy (how close the proposed PF is to the true, unknown optimal PF), by its size (having a large set of solutions in the PF is desirable), and by its diversity (having solutions that encompass a wide range of trade-offs).

Many model-based MOO methods exist that extend Bayesian optimization to the multi-objective case, such as ParEGO [6], ExI [5] and SMS-EGO [8]. We here refer to all these methods as Multi-Objective Bayesian Optimization (MOBO) methods. Currently, the main advantage of MOBO methods, compared to model-free MOO methods [4,12], is a reduced number of experiments/evaluations of the objective function. However, the models of the objective functions are used exclusively to select the next parameters to evaluate.

In this paper, we present a new perspective on MOBO and on the use of models in MOO. We demonstrate that it is possible to exploit the learned models, e.g., for better approximations of the Pareto front or for computing additional s...
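The hypervolume indicator (the quantity SMS-EGO optimizes internally) is one common scalar summary that rewards both accuracy and diversity of a front at once. As a hedged illustration only, here is a minimal two-objective sketch: the function name, the sample front, and the reference point are all hypothetical, and both objectives are assumed to be maximized with every point improving on the reference.

```python
def hypervolume_2d(front, ref):
    """Area jointly dominated by a 2-D front, bounded below by reference
    point `ref`. Assumes both objectives are maximized; dominated or
    duplicate points in `front` contribute nothing extra."""
    hv, prev_y = 0.0, ref[1]
    # Sweep from the best first-objective value downwards; each point adds
    # the rectangle between its second objective and the best seen so far.
    for x, y in sorted(front, key=lambda p: p[0], reverse=True):
        if y > prev_y:
            hv += (x - ref[0]) * (y - prev_y)
            prev_y = y
    return hv

# Hypothetical front and reference point (both objectives maximized).
hv = hypervolume_2d([(2.0, -4.0), (1.0, -3.0), (0.5, -2.0)], (0.0, -6.0))  # 5.5
```

A larger hypervolume indicates a front that is both closer to the true optimal trade-offs and more spread out, which is why it is a popular single-number progress measure in MOO.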