Real-time embedded systems are evolving into complex systems built on state-of-the-art technologies such as multi-core processors and virtualization. Both technologies require new real-time scheduling algorithms. For uniprocessor scheduling, utilization-based evaluation methodologies are well established. For multi-core systems and virtualization, however, evaluating and comparing scheduling techniques based on the tasks' parameters is more realistic. Evaluating such scheduling techniques requires relevant and standardised task sets. Scheduling algorithms can be evaluated at three levels: 1) using a mathematical model of the algorithm, 2) simulating the algorithm, and 3) implementing the algorithm on the target platform. Generating task sets is straightforward for the first two levels, since only the parameters of the tasks are required. Evaluating and comparing scheduling algorithms on the target platform itself, however, requires executable tasks matching the predefined standardised task sets. Generating those executable tasks is not yet standardised.
Therefore, we developed a task-set generator that produces reproducible, standardised task sets suitable at all three levels. In addition to generating the tasks' parameters, it includes a method that generates executables by combining publicly available benchmarks with known execution times. This paper presents and evaluates this task-set generator.
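To illustrate the kind of generation described above, the sketch below combines a UUniFast-style split of a target utilization over the tasks with a greedy selection from a table of benchmarks with known execution times, so that each executable task approximates its generated execution-time budget. The benchmark names, timings, period choices, and selection strategy are illustrative assumptions, not the generator's actual method.

    import random

    # Hypothetical pool of benchmarks with measured execution times (ms);
    # names and timings are illustrative, not taken from the paper.
    BENCHMARKS = {"fft": 2.0, "matmult": 5.0, "crc": 0.5, "qsort": 1.0}

    def uunifast(n, total_util, rng):
        """UUniFast-style split of total_util into n task utilizations."""
        utils, remaining = [], total_util
        for i in range(1, n):
            next_remaining = remaining * rng.random() ** (1.0 / (n - i))
            utils.append(remaining - next_remaining)
            remaining = next_remaining
        utils.append(remaining)
        return utils

    def generate_task_set(n, total_util, seed=0):
        """Generate n tasks: period, execution-time budget, benchmark sequence."""
        rng = random.Random(seed)  # fixed seed keeps the task set reproducible
        tasks = []
        for u in uunifast(n, total_util, rng):
            period = rng.choice([10, 20, 50, 100, 200])  # ms, illustrative values
            budget = u * period                          # budget C_i = U_i * T_i
            # Greedily fill the budget with benchmark runs whose execution
            # times are known, so the executable's run time approximates C_i.
            body, spent = [], 0.0
            while True:
                fitting = [b for b, c in BENCHMARKS.items() if spent + c <= budget]
                if not fitting:
                    break
                choice = rng.choice(fitting)
                body.append(choice)
                spent += BENCHMARKS[choice]
            tasks.append({"period": period, "budget": budget,
                          "achieved": spent, "benchmarks": body})
        return tasks

    if __name__ == "__main__":
        for task in generate_task_set(n=4, total_util=0.7, seed=42):
            print(task)

Because the generator is seeded, the same parameters and benchmark sequences can be regenerated for the mathematical, simulation, and target-platform evaluations alike.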