The multi-core multithreading microprocessor not only introduces resource sharing among threads in the same core, e.g., computation resources and private caches, but also isolates those resources between different cores. Moreover, when the Simultaneous Multithreading (SMT) architecture is employed, execution resources are fully shared among the concurrently executing threads in the same core, while isolation worsens as the number of cores increases. Although fetch policies, which assign priorities in the fetch stage, are well designed to manage the shared resources within a core, it is the scheduling policy that makes the distributed resources available to workloads by deciding how threads are assigned to cores. On the other hand, threads consume different resources in different phases, and Cycles Per Instruction spent on Memory (CPImem) is used to express their resource demands. Consequently, aiming at better performance by scheduling according to resource demands, we propose Mix-Scheduling, which evenly mixes threads across cores so as to achieve thread diversity, i.e., CPImem diversity, in every core. In our experiments we observe a 63% improvement in overall system throughput and a 27% improvement in average thread performance when comparing Mix-Scheduling with the reference policy Mono-Scheduling, which keeps CPImem uniform among the threads in every core on the chip. Furthermore, Mix-Scheduling also takes an essential step toward shortening load latency, since it reduces the L2 cache miss rate by 6% relative to Mono-Scheduling.
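The mixing idea in the abstract above can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual implementation: sort threads by their measured CPImem and deal them to cores round-robin, so that consecutive (similarly memory-intensive) threads land on different cores and each core receives a spread of memory- and compute-intensive threads.

```python
# Illustrative Mix-Scheduling-style assignment (assumed, simplified model):
# high-CPImem (memory-bound) and low-CPImem (compute-bound) threads are
# interleaved across cores to maximize CPImem diversity within each core.

def mix_schedule(threads, n_cores):
    """threads: list of (thread_id, cpi_mem) pairs.
    Returns a list of per-core thread-id lists."""
    cores = [[] for _ in range(n_cores)]
    # Sort by CPImem descending, then deal like cards: thread i goes to
    # core i mod n_cores, so similar threads are spread apart.
    for i, (tid, _) in enumerate(sorted(threads, key=lambda t: -t[1])):
        cores[i % n_cores].append(tid)
    return cores

# Example: two memory-bound and two compute-bound threads, two cores.
# Each core ends up with one of each kind.
assignment = mix_schedule([("t0", 4.0), ("t1", 3.5), ("t2", 0.5), ("t3", 0.4)], 2)
```

A Mono-Scheduling baseline, by contrast, would group the two high-CPImem threads on one core and the two low-CPImem threads on the other.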
Complexity in resource allocation grows dramatically as multiple cores and threads are implemented on Multicore Multithreaded Microprocessors (MMMP). This complexity is escalated by variations in workload behavior. To support a dynamic, adaptive, and scalable operating system (OS) scheduling policy for MMMP, we propose architectural strategies that construct linear models to capture workload behavior and then schedule threads according to their resource demands. This paper describes the design in three steps. In the first step we convert a static scheduling policy into a dynamic one that evaluates the thread-mapping pattern at runtime. In the second step we employ regression models to ensure that the scheduling policy can respond to the changing behavior of threads during execution. In the final step we limit the overhead of the proposed policy by adopting a heuristic approach, thus ensuring scalability under the exponential growth of core and thread counts. The experimental results validate our proposed model in terms of throughput, adaptability, and scalability. Compared with the baseline static approach, our phase-triggered scheduling policy achieves up to a 29% speedup. We also provide a detailed tradeoff study between performance and overhead that system architects can refer to when target systems and specific overhead budgets are given.
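The second step above relies on regression models of workload behavior. As a hypothetical illustration (the paper's actual predictors and model form are not given here), a scheduler might fit a one-variable linear model predicting a thread's CPI from a hardware counter such as its cache-miss rate, using the closed-form least-squares solution:

```python
# Assumed, simplified model: CPI ≈ a * miss_rate + b, fitted by ordinary
# least squares over samples collected in earlier execution phases.

def fit_linear(xs, ys):
    """Least-squares fit of y ≈ a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Example: three (miss_rate, CPI) samples that lie on the line CPI = 10*x + 1.
a, b = fit_linear([0.01, 0.02, 0.04], [1.1, 1.2, 1.4])
predicted_cpi = a * 0.03 + b  # predict CPI for a phase with miss rate 0.03
```

Refitting such a model when a phase change is detected is one way a "phase-triggered" policy could adapt its thread-mapping decisions at runtime.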
Traditional temporal logic regards protocols as closed systems to analyze. To overcome this shortcoming, a game-based analysis method is introduced. This method is applied to formally analyze a fair contract-signing protocol, and several defects of the protocol are found. An improved protocol is proposed that fixes the flaws by adding extra time-limit information and an abort subprotocol. The fairness and timeliness of the improved protocol are validated using ATL formulas and invariant checking, and the improved protocol is found to satisfy both properties.
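To make the game-based formulation concrete, fairness for a signer $A$ is commonly stated in ATL roughly as follows (an illustrative formula in standard ATL notation, not necessarily the exact one used in the paper): in every reachable state, if $B$ holds $A$'s signature, then $A$ still has a strategy to eventually obtain $B$'s signature.

$$\langle\langle \emptyset \rangle\rangle \,\Box \bigl(\mathit{hasSig}_B \;\rightarrow\; \langle\langle A \rangle\rangle \,\Diamond\, \mathit{hasSig}_A \bigr)$$

Here $\langle\langle A \rangle\rangle$ quantifies over $A$'s strategies against all behaviors of the other players, which is exactly the open-system view that closed-system temporal logic cannot express.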
The Simultaneous Multithreading (SMT) architecture improves resource efficiency by scheduling and executing concurrent threads in the same core. Fetch policies have been proposed to assign priorities in the fetch stage and thereby manage the shared resources. However, most fetch policies omit any study of power consumption, while today's power-management schemes focus on multicore processors. Given the growing demand to manage processor power consumption, and the fully shared system resources in an SMT environment, detailed research is required to develop power management for SMT processors. This paper proposes a power-aware fetch policy, PCOUNT, which evaluates the power consumption of two resource categories in SMT: computation resources and memory-access resources. In every CPU cycle, PCOUNT fetches from the thread with the lowest evaluated power consumption, in order to reduce overall power consumption. Furthermore, this paper compares the studied fetch policies in terms of power efficiency, calculated as evaluated power consumption per unit of system throughput. As a result, PCOUNT improves power efficiency over ICOUNT by 26% and over DWarn by 31% on average, while also achieving better overall system throughput and average thread performance than ICOUNT and DWarn.
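The per-cycle fetch decision described above can be sketched as follows. This is a hypothetical illustration with made-up energy weights, not the paper's actual power-evaluation model: each thread's power draw is estimated from its in-flight activity in the two named categories, and the thread with the lowest estimate wins the fetch slot.

```python
# Assumed per-event energy weights (arbitrary units); memory accesses are
# taken to cost more than computation operations purely for illustration.
W_COMPUTE, W_MEMORY = 1.0, 3.0

def pcount_pick(threads):
    """threads: dict mapping thread_id -> (inflight_compute_ops, inflight_memory_ops).
    Returns the thread_id with the lowest evaluated power consumption,
    i.e., the thread PCOUNT would fetch from this cycle."""
    def power(tid):
        compute, memory = threads[tid]
        return W_COMPUTE * compute + W_MEMORY * memory
    return min(threads, key=power)

# Example: t0 evaluates to 4*1 + 3*3 = 13, t1 to 6*1 + 1*3 = 9,
# so t1 is fetched despite having more instructions in flight.
chosen = pcount_pick({"t0": (4, 3), "t1": (6, 1)})
```

By contrast, an ICOUNT-style policy would count only in-flight instructions (here favoring t0 with 7 over t1 with 7, a tie), which is why a power-weighted count can pick a different thread.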