The high computational complexity required for performing an exact schedulability analysis of fixed-priority systems has led the research community to investigate new feasibility tests that are less complex than exact tests, but still provide reasonable performance in terms of acceptance ratio. The performance of a test is typically evaluated by generating a large number of synthetic task sets and then computing the fraction of those that pass the test with respect to the total number of feasible ones. The resulting ratio, however, depends on the metrics used for evaluating the performance and on the method for generating random task parameters. In particular, an important factor that affects the overall result of the simulation is the probability density function of the random variables used to generate the task set parameters.
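As an illustration of this evaluation procedure, the following Python sketch computes an acceptance ratio over randomly generated task sets; the test under evaluation, the task-set generator, and the exact feasibility check are hypothetical placeholders, not the specific setup used in the paper.

    def acceptance_ratio(test, generate_task_set, is_feasible, n_sets=10000):
        # Fraction of feasible random task sets that also pass the test.
        passed = feasible = 0
        for _ in range(n_sets):
            task_set = generate_task_set()    # random task parameters
            if is_feasible(task_set):         # exact (reference) analysis
                feasible += 1
                if test(task_set):            # sufficient test under study
                    passed += 1
        return passed / feasible if feasible else 0.0

The closer this ratio is to 1, the less pessimistic the test is on the generated workload; the point of the paper is that the ratio also depends on how the workload itself is generated.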
In this paper we discuss and compare three different metrics that can be used for evaluating the performance of schedulability tests. Then, we investigate how the random generation procedure can bias the simulation results for a specific scheduling algorithm. Finally, we present an efficient method for generating task sets with uniform distribution in a given space, and show how some intuitive solutions typically used for task set generation can bias the simulation results.
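As an example of uniform generation over the utilization space, a minimal sketch in the style of the UUniFast algorithm is shown below; it is offered as an illustrative assumption and is not necessarily the exact method proposed in the paper.

    import random

    def uunifast(n_tasks, total_util):
        # Draw n_tasks utilizations summing to total_util, uniformly
        # distributed over the corresponding (n_tasks - 1)-dimensional simplex.
        utils = []
        remaining = total_util
        for i in range(1, n_tasks):
            next_remaining = remaining * random.random() ** (1.0 / (n_tasks - i))
            utils.append(remaining - next_remaining)
            remaining = next_remaining
        utils.append(remaining)
        return utils

    # Example: ten tasks with total utilization 0.8
    # utilizations = uunifast(10, 0.8)

Naive alternatives, such as drawing each utilization independently and rescaling, concentrate the samples in a small region of the utilization space and can therefore bias the measured acceptance ratio.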