Cache memories have been introduced into embedded systems to prevent memory access times from becoming an unacceptable performance bottleneck. For hard real-time systems, it is vital that an accurate estimate of the worst-case response time of each task can be determined. Memory and cache are split into blocks containing instructions and data. During a pre-emption, blocks from the pre-empting task can evict those of the pre-empted task. When the pre-empted task resumes and has to re-load the evicted blocks, cache-related pre-emption delays (CRPD) are introduced, which in turn affect the worst-case response times of the tasks. Because the position of code in memory determines where that code will be placed in cache, different layouts result in different CRPD and different worst-case response times. We introduce an approach that uses simulated annealing to find layouts that minimise the CRPD incurred due to a pre-emption. This in turn reduces the worst-case response times of tasks, which increases the schedulability of the taskset. We use schedulability analysis that captures whether a block will have to be re-loaded after a pre-emption to drive the algorithm towards a near-optimal solution. After explaining our approach, we present a number of experiments that demonstrate its effectiveness for a range of system, task and cache configurations.
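The abstract only outlines the layout-optimisation idea, so the following is a minimal, hypothetical sketch of how simulated annealing over a code layout could be driven by a CRPD-style cost. The block model, the direct-mapped cache with NUM_SETS sets, and the crpd_cost proxy (which simply counts cache sets shared by a pre-empting and a pre-empted task) are illustrative assumptions, not the paper's method, which uses full schedulability analysis as the cost function.

```python
import math
import random

# Hypothetical toy model: each code block of a task maps, via its position in the
# memory layout, to cache set (position % NUM_SETS) of a direct-mapped cache.
NUM_SETS = 16

def cache_sets(layout, task_blocks):
    """Cache sets touched by a task's code blocks under the given layout."""
    return {layout[b] % NUM_SETS for b in task_blocks}

def crpd_cost(layout, preempted, preempting):
    """Proxy cost for CRPD: useful cache sets of the pre-empted task that the
    pre-empting task can evict. The paper instead drives the search with
    schedulability analysis; this proxy keeps the sketch self-contained."""
    return len(cache_sets(layout, preempted) & cache_sets(layout, preempting))

def anneal(blocks, preempted, preempting, iters=20000, t0=5.0, cooling=0.9995):
    """Simulated annealing over memory positions of code blocks."""
    positions = list(range(len(blocks)))
    random.shuffle(positions)
    layout = dict(zip(blocks, positions))
    cost = crpd_cost(layout, preempted, preempting)
    best, best_cost = dict(layout), cost
    t = t0
    for _ in range(iters):
        a, b = random.sample(blocks, 2)                  # neighbour: swap two blocks
        layout[a], layout[b] = layout[b], layout[a]
        new_cost = crpd_cost(layout, preempted, preempting)
        if new_cost <= cost or random.random() < math.exp((cost - new_cost) / t):
            cost = new_cost                              # accept the (possibly worse) move
            if cost < best_cost:
                best, best_cost = dict(layout), cost
        else:
            layout[a], layout[b] = layout[b], layout[a]  # reject: undo the swap
        t *= cooling
    return best, best_cost

# Example: two tasks of eight code blocks each that initially share cache sets.
task_a = [f"A{i}" for i in range(8)]
task_b = [f"B{i}" for i in range(8)]
layout, conflicts = anneal(task_a + task_b, task_a, task_b)
print("remaining conflicting cache sets:", conflicts)
```

In practice the cost function would be replaced by the CRPD-aware response time analysis the abstract refers to, so that the search is guided directly by taskset schedulability rather than by raw cache-set conflicts.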
In hard real-time systems, cache partitioning is often suggested as a means of increasing the predictability of caches in pre-emptively scheduled systems: when a task is assigned its own cache partition, inter-task cache eviction is avoided, and timing verification is reduced to the standard worst-case execution time analysis used in non-pre-emptive systems. The downside of cache partitioning is the potential increase in execution times. In this paper, we evaluate cache partitioning for hard real-time systems in terms of overall schedulability. To this end, we examine the sensitivity of (i) task execution times and (ii) pre-emption costs to the size of the cache partition allocated, and present a cache partitioning algorithm that is optimal with respect to taskset schedulability. We also devise an alternative algorithm which primarily optimises schedulability but additionally minimises processor utilization. We evaluate the performance of cache partitioning against state-of-the-art pre-emption cost analysis based on benchmark code and on a large number of synthetic tasksets under both fixed priority and EDF scheduling. This allows us to derive general conclusions about the usability of cache partitioning and to identify the taskset and system parameters that influence its relative effectiveness. We further examine how the schedulability of a group of tasks sharing a partition depends upon partition size, as well as the improvement in processor utilization obtained using the alternative algorithm compared to the original cache partitioning algorithm, and the trade-off in terms of increased analysis time.
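As a rough illustration of the kind of search described above, the sketch below exhaustively tries allocations of cache ways to tasks, keeps only those that pass standard fixed-priority response time analysis (no CRPD term is needed, since partitioned tasks cannot evict each other's blocks), and among those picks the allocation with the lowest utilisation. The WCET-versus-partition-size curves, the way granularity, and the function names (best_partitioning, response_time_schedulable) are assumptions for illustration; the paper's algorithm is optimal with respect to schedulability rather than a brute-force enumeration.

```python
import math
from itertools import product

def response_time_schedulable(tasks):
    """Standard fixed-priority response time analysis (deadlines equal periods).
    tasks: list of (C, T) pairs, highest priority first."""
    for i, (c_i, t_i) in enumerate(tasks):
        r = c_i
        while True:
            r_next = c_i + sum(math.ceil(r / t_j) * c_j for c_j, t_j in tasks[:i])
            if r_next > t_i:
                return False
            if r_next == r:
                break
            r = r_next
    return True

def best_partitioning(wcet_of, periods, total_ways, min_ways=1):
    """Brute-force sketch: try every split of the cache ways among the tasks and
    keep the schedulable allocation with the lowest processor utilisation.
    wcet_of[i](w) gives task i's WCET with w ways (assumed non-increasing in w)."""
    best = None
    n = len(periods)
    for alloc in product(range(min_ways, total_ways + 1), repeat=n):
        if sum(alloc) > total_ways:
            continue
        tasks = [(wcet_of[i](alloc[i]), periods[i]) for i in range(n)]
        if not response_time_schedulable(tasks):
            continue
        util = sum(c / t for c, t in tasks)
        if best is None or util < best[1]:
            best = (alloc, util)
    return best

# Toy, purely illustrative WCET-versus-partition-size curves and periods.
wcet = [lambda w: 4 + 8 // w, lambda w: 6 + 12 // w, lambda w: 10 + 20 // w]
print(best_partitioning(wcet, periods=[25, 60, 200], total_ways=8))
```

The sensitivity study in the abstract corresponds to how quickly each wcet_of[i](w) curve flattens: once extra ways stop reducing a task's WCET, giving that task a larger partition only takes cache away from tasks that could still benefit.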
Hierarchical scheduling provides a means of composing multiple real-time applications onto a single processor such that the temporal requirements of each application are met. This has become a popular technique in industry as it allows applications from multiple vendors, as well as legacy applications, to co-exist in isolation on the same platform. However, performance-enhancing features such as caches mean that one application can interfere with another by evicting cache blocks that the other application was using, violating the requirement of temporal isolation. While one solution is to flush the cache after every application context switch, this can lead to a significant degradation in performance. In this paper, we present analysis that bounds the additional delay due to blocks being evicted from cache by other applications in a system using hierarchical scheduling.
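To make the kind of bound described above concrete, here is a deliberately simplified, hypothetical sketch: each time the application's server is suspended and later resumed (at most ceil(window / server_period) times in an analysis window), every useful cache block (UCB) of the task that overlaps the evicting cache blocks (ECBs) of the other applications may have to be reloaded once, at a block reload time BRT. The names, the resumption count, and the BRT value are illustrative assumptions, not the analysis presented in the paper.

```python
import math

BRT = 8  # hypothetical cache block reload time, in microseconds

def inter_app_crpd(ucbs_task, ecbs_other_apps, window, server_period):
    """Sketch of an upper bound on the extra delay a task can suffer because other
    applications evict its useful cache blocks. Each resumption of the task's
    server may force a reload of every UCB that overlaps the other applications'
    ECBs; resumptions are bounded by ceil(window / server_period)."""
    evictable = set(ucbs_task) & set().union(*ecbs_other_apps)
    resumptions = math.ceil(window / server_period)
    return resumptions * len(evictable) * BRT

# Illustrative numbers only (times in microseconds).
ucbs = {1, 2, 3, 7, 9}                  # cache sets the task reuses after resumption
other_apps = [{2, 3, 4}, {9, 10, 11}]   # cache sets touched by the other applications
print(inter_app_crpd(ucbs, other_apps, window=10000, server_period=2500))
```

A bound of this form can then be added as an extra interference term in the application-level response time analysis, which is the role the paper's analysis plays; flushing the cache on every application context switch corresponds to the pessimistic special case where every UCB is treated as evictable.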