It is challenging to simultaneously achieve multicore scalability and high disk throughput in a file system. For example, even for commutative operations like creating different files in the same directory, current file systems introduce cache-line conflicts when updating an in-memory copy of the on-disk directory block, which limits scalability. ScaleFS is a novel file system design that decouples the in-memory file system from the on-disk file system using per-core operation logs. This design facilitates the use of highly concurrent data structures for the in-memory representation, which allows commutative operations to proceed without cache conflicts and hence scale perfectly. ScaleFS logs operations in a per-core log so that it can delay propagating updates to the disk representation (and the cache-line conflicts involved in doing so) until an fsync. The fsync call merges the per-core logs and applies the operations to disk. ScaleFS uses several techniques to perform the merge correctly while achieving good performance: timestamped linearization points to order updates without introducing cache-line conflicts, absorption of logged operations, and dependency tracking across operations. Experiments with a prototype of ScaleFS show that its implementation has no cache conflicts for 99% of test cases of commutative operations generated by Commuter, scales well on an 80-core machine, and provides on-disk performance that is comparable to that of Linux ext4.
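The core mechanism described above — per-core logs with timestamped linearization points, merged and absorbed at fsync time — can be illustrated with a small sketch. This is a toy model in Python, not the ScaleFS implementation (which is a kernel file system); the class and method names are invented for illustration, and "disk" here is just a dictionary standing in for the on-disk directory representation.

```python
import itertools
import time

class PerCoreLogFS:
    """Toy model of the per-core-log idea: each core appends to its own
    operation log (no shared cache lines at operation time), and fsync
    merges the logs by timestamp before applying them to the 'disk'."""

    def __init__(self, ncores):
        self.logs = [[] for _ in range(ncores)]  # one private log per core
        self.disk = {}                           # simulated on-disk directory

    def log_create(self, core, name):
        # The in-memory operation completes immediately; the timestamp
        # records its linearization point so fsync can order it later
        # without any cross-core communication now.
        self.logs[core].append((time.monotonic_ns(), "create", name))

    def log_unlink(self, core, name):
        self.logs[core].append((time.monotonic_ns(), "unlink", name))

    def fsync(self):
        # Merge all per-core logs into one timestamp-ordered stream.
        merged = sorted(itertools.chain.from_iterable(self.logs))
        # Absorption: only the last operation per name determines the
        # final directory state, so collapse the stream before writing.
        last = {}
        for _ts, op, name in merged:
            last[name] = op
        for name, op in last.items():
            if op == "create":
                self.disk[name] = True
            else:
                self.disk.pop(name, None)
        for log in self.logs:
            log.clear()
```

In this sketch, a create followed by an unlink of the same file is absorbed entirely and never touches the disk representation, which is one of the ways the paper's design reduces write traffic at fsync time.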
With the growing importance of the cloud computing paradigm, it is a challenge for cloud providers to keep the operational costs of data centers in check, especially in emerging markets, while still catering to customers' needs. It becomes essential to increase the operational efficiency of data centers so as to maximize VM (virtual machine) offerings at minimal cost. To that end, the energy efficiency of the servers plays a critical role, as it influences the electrical and cooling costs that constitute a major part of the total cost of operating a data center.

Power savings can be achieved at several different levels in a system: processors, memory, devices, and system-wide (powering down multiple components of a host all at once). At the processor level, depending on workload trends, we can exploit technologies like DVFS (Dynamic Voltage and Frequency Scaling) or P-states when the CPU is running, and CPU sleep states (C-states) when the CPU is idle, to save power. Memory standards such as DDR3 have provisions for putting idle memory banks into low-power states. At the device level, individual devices can be put into low-power states, controlled and coordinated by a run-time power management framework in the operating system.

This paper outlines the state of the art in power-management technology on server hardware and describes how these raw features can be abstracted into a set of energy policies. We then explain how these policies, or energy profiles, can be used to run a cloud data center energy-efficiently. Further, this paper highlights some of the challenges involved in running cloud infrastructures in emerging markets optimally despite some unique energy constraints.
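The DVFS/P-state mechanism mentioned above amounts to choosing a CPU frequency high enough to cover the current load while avoiding wasted power. The following is a minimal illustrative sketch of such a frequency-selection policy, in the spirit of Linux's load-tracking cpufreq governors; the function name, the 25% headroom factor, and the capacity model (capacity proportional to frequency) are all assumptions for illustration, not any real governor's algorithm.

```python
def pick_pstate(utilization, freqs):
    """Pick the lowest available frequency (kHz or MHz, any consistent
    unit) whose capacity covers the current utilization plus headroom.

    utilization: fraction of the *maximum* frequency's capacity in use
                 (0.0 - 1.0), e.g. as measured over the last interval.
    freqs:       available P-state frequencies.
    """
    fmax = max(freqs)
    # Required capacity, with 25% headroom (assumed) so the CPU is not
    # driven at 100% of the chosen state and can absorb small bursts.
    needed = utilization * fmax * 1.25
    for f in sorted(freqs):
        if f >= needed:
            return f       # lowest state that still covers the load
    return fmax            # saturated: run at the highest P-state
```

A light load thus selects a low frequency (saving power quadratically with voltage under DVFS), while a heavy load falls through to the maximum P-state.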