One of the key challenges for high-density servers (e.g., blades) is the increased cost of addressing the power and heat density associated with compaction. Prior approaches have mainly focused on reducing the heat generated at the level of an individual server. In contrast, this work pursues power efficiencies at a larger scale by leveraging statistical properties of concurrent resource usage across a collection of systems (an "ensemble"). Specifically, we discuss an implementation of this approach at the blade enclosure level to monitor and manage power across the individual blades in a chassis. Our approach requires only low-cost hardware modifications and relatively simple software support. We evaluate our architecture through both prototyping and simulation. For workloads representing 132 servers from nine different enterprise deployments, we show significant power budget reductions at performance comparable to conventional systems.
The ability to safely keep a secret in memory is central to the vast majority of security schemes, but storing and erasing these secrets is a difficult problem in the face of an attacker who can obtain unrestricted physical access to the underlying hardware. Depending on the memory technology, the very act of storing a 1 instead of a 0 can have physical side effects measurable even after the power has been cut. These effects cannot be hidden easily, and if the secret stored on chip is of sufficient value, an attacker may go to extraordinary lengths to learn even a few bits of that information. Solving this problem requires a new class of architectures that measurably increase the difficulty of physical analysis. In this paper we take a first step toward this goal by focusing on one of the backbones of any hardware system: on-chip memory. We examine the relationship between security, area, and efficiency in these architectures, and quantitatively evaluate the resulting systems through cryptographic analysis and microarchitectural impact. In the end, we find an efficient scheme in which, even if an adversary is able to inspect the value of a stored bit with a probabilistic error of only 5%, our system prevents that adversary from learning any information about the original uncoded bits with 99.9999999999% probability.
Often, the process of and effort involved in building interoperable simulations and applications can be arduous. Invariably, the difficulty lies in understanding what is intended. This paper introduces the notion of composable bridges as a means to help transition abstract ideas or concepts into concrete implementations. We examine the key elements needed to achieve composability, which include the direction provided by a process, the importance of a conceptual model, the use of patterns to help characterize reusable aspects of a design, the importance of having good discovery metadata and well-defined interfaces that can be implemented, the use of components, and the practical use of libraries and tools. We suggest that, of all these elements, a properly documented conceptual model provides the basis for formulating a composable bridge, and that patterns, discovery metadata, and interfaces play a key role. We take a look at a specific standard known as the Base Object Model (BOM) and examine how it provides a means to define a composable bridge. We explore how BOMs, in this capacity, can be aggregated and used (and reused) to support the creation of concrete implementations. We also explore how such composability helps to achieve various levels of interoperability.