SUMMARY

Automatic test data generation is a very popular topic in the field of search-based software engineering. Traditionally, the main goal has been to maximize coverage. However, other objectives can be defined, such as the oracle cost: the cost of executing the entire test suite and checking the system's behaviour. In very large software systems, the cost of testing can be a real issue, so it makes sense to consider two conflicting objectives: maximizing the coverage and minimizing the oracle cost. This is what we do in this paper. We mainly compare two approaches to the multi-objective test data generation problem: a direct multi-objective approach, and a mono-objective algorithm combined with a multi-objective test case selection step. Concretely, in this work we use four state-of-the-art multi-objective algorithms and two mono-objective evolutionary algorithms followed by a multi-objective test case selection based on Pareto efficiency. The experimental analysis compares these techniques on two different benchmarks. The first is composed of 800 Java programs created with a program generator; the second comprises 13 real programs extracted from the literature. For the direct multi-objective approach, the results indicate that the oracle cost can be properly optimized; however, achieving full branch coverage of the system poses a great challenge. As for the mono-objective algorithms, although they need a second phase of test case selection to reduce the oracle cost, they are very effective at maximizing branch coverage.
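The Pareto-efficient test case selection mentioned above can be illustrated with a minimal sketch. This is not the paper's implementation; it only shows the underlying idea, assuming each candidate test suite is scored by a pair (branch coverage, oracle cost), where coverage is to be maximized and cost minimized, and only the non-dominated suites are kept.

```python
def dominates(a, b):
    """a and b are (coverage, cost) pairs. a dominates b if it is
    no worse in both objectives (coverage at least as high, cost at
    most as high) and strictly better in at least one of them."""
    cov_a, cost_a = a
    cov_b, cost_b = b
    return (cov_a >= cov_b and cost_a <= cost_b
            and (cov_a > cov_b or cost_a < cost_b))

def pareto_front(candidates):
    """Return the non-dominated subset of (coverage, cost) points."""
    return [c for c in candidates
            if not any(dominates(other, c)
                       for other in candidates if other != c)]

# Toy example with four hypothetical candidate suites:
# (0.90, 120) is dominated by (0.90, 80) (same coverage, cheaper),
# and (0.60, 60) is dominated by (0.75, 40).
suites = [(0.90, 120), (0.90, 80), (0.75, 40), (0.60, 60)]
print(pareto_front(suites))  # [(0.9, 80), (0.75, 40)]
```

Any solution on the resulting front represents a different trade-off between coverage achieved and the cost of checking the outputs, which is exactly the conflict the two objectives express.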