Embedded systems, like general-purpose systems, can benefit from parallel execution on a symmetric multicore platform. Unfortunately, the concurrency issues present in general-purpose programming also apply to embedded systems, and protection is currently offered only by performance-limiting coarse-grained locking or by error-prone, difficult-to-implement fine-grained locking. Transactional memory offers relief from these mechanisms but has primarily been investigated on general-purpose systems. In this paper, we present Embedded Software Transactional Memory (ESTM) as a novel solution to the concurrency problem in parallel embedded applications. We examine common software transactional memory design decisions and discuss which choices best suit an embedded platform. We offer a full implementation of an embedded STM and evaluate it against both coarse-grained and fine-grained locking. We find that it meets or beats the performance of fine-grained locking across a range of application characteristics, including the size of the shared data, the time spent in the critical section, and the contention between threads. Our ESTM implementation benefits from effective use of L1 memory, a capability built into our STM model that traditional locking mechanisms cannot exploit directly.
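To make the contrast concrete, the sketch below shows the same shared-data update written with coarse-grained locking and with a transactional interface. This is a minimal illustration only: the `stm_begin`/`stm_read`/`stm_write`/`stm_commit` names are hypothetical placeholders, since the abstract does not specify ESTM's actual API.

```c
#include <pthread.h>
#include <stdint.h>

/* Shared state updated by multiple threads. */
typedef struct { int64_t balance; } account_t;

/* --- Coarse-grained locking: one global lock serializes every transfer. --- */
static pthread_mutex_t global_lock = PTHREAD_MUTEX_INITIALIZER;

void transfer_coarse(account_t *from, account_t *to, int64_t amount)
{
    pthread_mutex_lock(&global_lock);   /* all threads contend on one lock */
    from->balance -= amount;
    to->balance   += amount;
    pthread_mutex_unlock(&global_lock);
}

/* --- Transactional version: the runtime detects conflicts and retries. ---
 * The stm_* functions are assumed, not ESTM's real interface. */
extern void    stm_begin(void);
extern int64_t stm_read(int64_t *addr);
extern void    stm_write(int64_t *addr, int64_t value);
extern int     stm_commit(void);        /* returns 0 on conflict, forcing a retry */

void transfer_stm(account_t *from, account_t *to, int64_t amount)
{
    do {
        stm_begin();
        stm_write(&from->balance, stm_read(&from->balance) - amount);
        stm_write(&to->balance,   stm_read(&to->balance)   + amount);
    } while (!stm_commit());             /* retry until the transaction commits */
}
```

Unlike fine-grained locking, the transactional form needs no per-object lock ordering, which is the usability advantage the paper targets.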
As memory speeds and bus capacitances continue to rise, the external memory bus will account for an increasing portion of the total power budget of system-on-a-chip embedded systems. Both hardware and software approaches can be explored to balance the power/performance tradeoff associated with external memory. In this paper we present a hardware-based, programmable external-memory page-remapping mechanism that can significantly improve performance and reduce the power consumed by external memory bus accesses. Our approach was developed by studying the data access patterns common in embedded multimedia applications. We evaluate a mechanism that performs page remapping of external memory, and we develop an efficient algorithm that maps application data and instruction memory into external memory pages. We employ graph-coloring techniques to guide the page-mapping procedure: the objective is to avoid page misses by remapping conflicting pages to different memory banks (i.e., by assigning them different colors). Our algorithm reduces the memory page miss rate by 70-80% on average. For a 4-bank SDRAM memory system, it reduces external memory access time by 12.6% and reduces power consumption in the majority of the benchmarks, by 13.2% on average. Combining the effects on both power and delay, the algorithm significantly reduces total energy cost.
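The abstract does not detail the coloring procedure, so the following is only a sketch of one common approach: a greedy coloring of a page-conflict graph, where pages that would thrash the same bank's open row are pushed into different banks. The page and bank counts, and the `conflict` matrix built from profiled access patterns, are assumptions for illustration.

```c
#include <stdbool.h>

/* Hypothetical sizes: 16 logical pages mapped onto a 4-bank SDRAM. */
#define NUM_PAGES 16
#define NUM_BANKS 4

/* conflict[i][j] is true when pages i and j are accessed in an interleaved
 * pattern, so placing them in the same bank would cause repeated page misses.
 * In practice this would be filled from an access-pattern profile. */
static bool conflict[NUM_PAGES][NUM_PAGES];

/* Greedy coloring: visit pages in order and give each the lowest-numbered
 * bank (color) not already used by a conflicting neighbour. If every bank
 * conflicts, fall back to bank 0 and accept the residual misses. */
void assign_banks(int bank_of[NUM_PAGES])
{
    for (int p = 0; p < NUM_PAGES; p++) {
        bool used[NUM_BANKS] = { false };

        for (int q = 0; q < p; q++)
            if (conflict[p][q])
                used[bank_of[q]] = true;

        bank_of[p] = 0;                       /* default when all banks conflict */
        for (int b = 0; b < NUM_BANKS; b++) {
            if (!used[b]) { bank_of[p] = b; break; }
        }
    }
}
```

The resulting `bank_of` table is the kind of mapping a programmable remapping unit could apply in hardware, so conflicting pages land in different banks and each bank's open row is reused more often.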