Virtualization techniques for embedded real-time systems, as known from the Integrated Modular Avionics (IMA) architecture of the ARINC 653 standard, typically employ TDMA scheduling to achieve temporal isolation among the virtualized partitions. Due to the fixed TDMA schedule, the worst-case interrupt response times increase significantly. A previously proposed mitigation is to admit interrupts within the TDMA schedule, achieving better interrupt response times while a monitor preserves a sufficient degree of temporal independence. In this paper we propose a novel approach that optimizes the TDMA schedule based on the partitions' internal timing behavior and task parameters. The developed optimization algorithm maximizes the slack within the TDMA cycle; this slack is then used to interpose interrupts, while a monitor bounds the resulting interference. We show the correctness of the approach and evaluate it in a hypervisor implementation.
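The following is a minimal Python sketch of the idea of collecting slack in a TDMA cycle and admitting interrupts into it under a monitor budget. All partition names, slot lengths, demands, and the IRQ budget are made-up illustration values; this is not the paper's optimization algorithm, only the accounting it builds on.

    # Hedged sketch: each partition slot is longer than the partition's worst-case
    # demand; the freed time accumulates as slack in which pending interrupts may
    # be served, and a monitor budget caps the admissible interference per cycle.
    slots = [("P1", 10, 7), ("P2", 10, 4), ("P3", 10, 9)]  # (partition, slot length, demand), ms
    irq_budget = 3  # maximum interrupt service time the monitor admits per cycle, ms

    slack = sum(slot - demand for _, slot, demand in slots)  # (10-7)+(10-4)+(10-9) = 10 ms
    usable_for_irqs = min(slack, irq_budget)
    print(f"cycle slack: {slack} ms, admissible IRQ time: {usable_for_irqs} ms")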
Virtualization techniques for embedded real-time systems typically employ TDMA scheduling to achieve temporal isolation among different virtualized applications. Recent work has introduced sporadic-server-based solutions that rely on budgets instead of a fixed TDMA schedule. While these provide better average-case response times for IRQs and tasks, a formal worst-case response time analysis has been missing. To confirm the advantage of sporadic-server-based budget scheduling, this paper provides such a worst-case response time analysis. To improve the budget scheduling further, we also present a background scheduling implementation, which is likewise covered by the formal analysis. We show the correctness of the analysis and compare it against TDMA-based systems. In addition, we provide response time measurements from a working hypervisor implementation on an ARM-based development board.
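To make the kind of reasoning involved concrete, the sketch below computes a worst-case response time bound for a task running inside a budget-scheduled partition using the classical fixed-priority request bound and the standard linear supply lower bound of a periodic/sporadic server (lsbf(t) = (Q/P)(t - 2(P - Q))). This is a generic, pessimistic textbook-style condition under assumed parameters, not the analysis developed in the paper.

    import math

    def wcrt_in_sporadic_server(C, hp, Q, P, horizon=100_000):
        """Smallest t such that the server's linear supply bound covers the demand
        C + sum(ceil(t/Tj) * Cj) of the analysed task and its higher-priority set hp.
        Q is the server budget per replenishment period P. Returns None if no such
        t is found within the horizon."""
        def demand(t):
            return C + sum(math.ceil(t / Tj) * Cj for Cj, Tj in hp)
        t = C
        while t <= horizon:
            d = demand(t)
            if (Q / P) * (t - 2 * (P - Q)) >= d:   # lsbf(t) >= demand(t)
                return t
            t = d * P / Q + 2 * (P - Q)            # earliest instant that could supply d
        return None

    # Hypothetical numbers (ms): task with C=2, one higher-priority task (C=1, T=10),
    # partition budget Q=4 every P=10 -> bound of 24.5 ms.
    print(wcrt_in_sporadic_server(2, [(1, 10)], Q=4, P=10))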
In automotive and industrial real-time software systems, the primary timing constraints relate to cause-effect chains. A cause-effect chain is a sequence of linked tasks that typically implements the process of reading sensor data, computing algorithms, and driving actuators. Classic timing analysis computes the maximum end-to-end latency of a given cause-effect chain to verify that its end-to-end deadline is satisfied in all cases. This information is useful but not sufficient in practice: software usually evolves, and updates may alter the maximum end-to-end latency. It would be desirable to judge the quality of a software design a priori by quantifying how robust the timing of a given cause-effect chain is in the presence of software updates. In this paper, we derive robustness margins which guarantee that, as long as software extensions stay within certain bounds, the end-to-end deadline of a cause-effect chain is still satisfied. Robustness margins are also useful when the system model has uncertain parameters: a robust system design can tolerate bounded deviations from the nominal system model without violating timing constraints. The results are applicable to both the bounded execution time programming model and the (system-level) logical execution time programming model. We evaluate the approach on an industrial use case from the automotive domain and on synthetically generated experiments using our open-source tool TORO.
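As a simple illustration of end-to-end latency and slack, the sketch below uses the well-known conservative bound for periodic tasks communicating over shared registers, where each task contributes its period plus its worst-case response time. The chain, the numbers, and the 50 ms deadline are hypothetical, and the "margin" shown is only deadline slack, not the per-task robustness margins derived in the paper or the exact analysis implemented in TORO.

    def max_end_to_end_latency(chain):
        # chain: list of (period, worst-case response time) per task, ms;
        # conservative bound: sum of (period + WCRT) over the chain.
        return sum(T + R for T, R in chain)

    # Hypothetical sensor -> filter -> actuator chain (made-up numbers, ms)
    chain = [(10, 2), (20, 5), (5, 1)]
    latency = max_end_to_end_latency(chain)  # (10+2)+(20+5)+(5+1) = 43 ms
    margin = 50 - latency                    # 7 ms of slack w.r.t. a 50 ms end-to-end deadline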