Virtualization has become an important tool in computer system design, and virtual machines are used in a number of subdisciplines ranging from operating systems to programming languages to processor architectures. By freeing developers and users from traditional interface and resource constraints, VMs enhance software interoperability, system impregnability, and platform versatility.

Because VMs are the product of diverse groups with different goals, however, there has been relatively little unification of VM concepts. Consequently, it is useful to take a step back, consider the variety of VM architectures, and describe them in a unified way, putting both the notion of virtualization and the types of VMs in perspective.

ABSTRACTION AND VIRTUALIZATION

Despite their incredible complexity, computer systems exist and continue to evolve because they are designed as hierarchies with well-defined interfaces that separate levels of abstraction. Using well-defined interfaces facilitates independent subsystem development by both hardware and software design teams. The simplifying abstractions hide lower-level implementation details, thereby reducing the complexity of the design process.

Figure 1a shows an example of abstraction applied to disk storage. The operating system abstracts hard-disk addressing details (for example, that the disk consists of sectors and tracks) so that the disk appears to application software as a set of variable-sized files. Application programmers can then create, write, and read files without knowing the hard disk's construction and physical organization.

A computer's instruction set architecture (ISA) clearly exemplifies the advantages of well-defined interfaces. Well-defined interfaces permit development of interacting computer subsystems not only in different organizations but also at different times, sometimes years apart. For example, Intel and AMD designers develop microprocessors that implement the Intel IA-32 (x86) instruction set, while Microsoft developers write software that is compiled to the same instruction set. Because both groups satisfy the ISA specification, the software can be expected to execute correctly on any PC built with an IA-32 microprocessor.

Unfortunately, well-defined interfaces also have their limitations. Subsystems and components designed to specifications for one interface will not work with those designed for another. For example, application programs, when distributed as compiled binaries, are tied to a specific ISA and depend on a specific operating system interface. This lack of interoperability can be confining, especially in a world of networked computers where it is advantageous to move software as freely as data.

Virtualization provides a way of getting around such constraints. Virtualizing a system or component (such as a processor, memory, or an I/O device) at a given abstraction level maps its interface and visible resources onto the interface and resources of an underlying, possibly different, real system. Consequently, the real system appears as a different...
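To make the two ideas concrete, here is a minimal Python sketch of the disk example (all class and method names are illustrative, not from the article): the file layer abstracts sector-level details away, while the virtual disk maps a guest's sector interface onto a slice of a real disk's resources.

```python
# Illustrative sketch: abstraction vs. virtualization of a disk.

class RawDisk:
    """A 'real' disk addressed by sector number."""
    SECTOR_SIZE = 512

    def __init__(self, num_sectors):
        self._sectors = [bytes(self.SECTOR_SIZE)] * num_sectors

    def read_sector(self, n):
        return self._sectors[n]

    def write_sector(self, n, data):
        self._sectors[n] = data.ljust(self.SECTOR_SIZE, b"\0")


class FileLayer:
    """Abstraction: hides sectors and tracks behind variable-sized files."""

    def __init__(self, disk):
        self._disk = disk
        self._table = {}                    # file name -> sector numbers
        self._free = list(range(len(disk._sectors)))

    def write(self, name, data):
        size = RawDisk.SECTOR_SIZE
        chunks = [data[i:i + size] for i in range(0, len(data), size)] or [b""]
        self._table[name] = [self._free.pop() for _ in chunks]
        for sector, chunk in zip(self._table[name], chunks):
            self._disk.write_sector(sector, chunk)

    def read(self, name):
        # Returns the file's data, zero-padded to sector granularity.
        return b"".join(self._disk.read_sector(s) for s in self._table[name])


class VirtualDisk:
    """Virtualization: maps a guest disk's sector interface onto the
    interface and resources of an underlying, possibly larger, real disk."""

    def __init__(self, host_disk, base, num_sectors):
        self._host, self._base, self._size = host_disk, base, num_sectors

    def read_sector(self, n):               # same interface as RawDisk
        assert 0 <= n < self._size
        return self._host.read_sector(self._base + n)

    def write_sector(self, n, data):
        assert 0 <= n < self._size
        self._host.write_sector(self._base + n, data)
```

Note how VirtualDisk exposes exactly the RawDisk interface: software written against the real disk runs unchanged against the virtual one, and several VirtualDisks with different base offsets can share a single underlying real disk.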
Many studies point to the difficulty of scaling existing computer architectures to meet the needs of an exascale system (i.e., capable of executing 10^18 floating-point operations per second), consuming no more than 20 MW in power, by around the year 2020. This paper outlines a new architecture, the Active Memory Cube, which reduces the energy of computation significantly by performing computation in the memory module, rather than moving data through large memory hierarchies to the processor core. The architecture leverages a commercially demonstrated 3D memory stack called the Hybrid Memory Cube, placing sophisticated computational elements on the logic layer below its stack of dynamic random-access memory (DRAM) dies. The paper also describes an Active Memory Cube tuned to the requirements of a scientific exascale system. The computational elements have a vector architecture and are capable of performing a comprehensive set of floating-point and integer instructions, predicated operations, and gather-scatter accesses across memory in the Cube. The paper outlines the software infrastructure used to develop applications and to evaluate the architecture, and describes results of experiments on application kernels, along with performance and power projections.
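As a concrete illustration of the access pattern those instructions support, here is a short sketch in plain NumPy (not the Active Memory Cube instruction set; the array contents and indices are made up) showing a gather, a predicated operation, and a scatter:

```python
import numpy as np

# Gather: collect non-contiguous memory elements into a dense vector.
memory = np.arange(100, dtype=np.float64)   # stand-in for DRAM in the cube
indices = np.array([3, 17, 42, 96])         # arbitrary, scattered addresses
vreg = memory[indices]                      # gather into a "vector register"

# Predicated operation: lanes update only where the mask is true.
mask = vreg > 20.0
vreg = np.where(mask, vreg * 2.0, vreg)

# Scatter: write the vector lanes back to the non-contiguous addresses.
memory[indices] = vreg
```

Performing such operations in the memory module itself, rather than shipping the data through the memory hierarchy to a processor core, is the source of the energy reduction the paper targets.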
In this paper we present methods for generating bounds on interconnection delays in a combinational network having specified timing requirements at its input and output terminals. An automatic placement program that uses wirability as its primary objective could use these delay bounds to generate length or capacitance bounds for interconnection nets as secondary objectives. Thus, unlike previous timing-driven placement algorithms described in the literature, the desired performance of the circuit is guaranteed when a wirable placement meeting these objectives is found. We also provide fast algorithms that maximize the delay range, and hence the margin for error in layout, for various types of timing constraints.

Ellen J. Yoffa (M'86) received the B.S. and Ph.D. degrees in physics from the Massachusetts Institute of Technology, where her area of study was theoretical solid-state physics. In 1978, she joined the IBM Thomas J. Watson Research Center for postdoctoral work in the Semiconductor Science and Technology Department, where she investigated ballistic conduction in semiconductor devices and the physics of the laser annealing process. Since 1980, she has been a member of the research staff in the Computer Science Department and is currently senior manager of the System Design and Verification Department, where she is responsible for managing research in tools for computer-aided design of VLSI chips, including system description and early design tools, logic synthesis, and hardware and software logic simulation. She is a member of the editorial board of IEEE Design and Test of Computers. Her research has involved the development of tools for VLSI physical design automation. Dr. Yoffa is a member of Phi Beta Kappa and Sigma Xi.
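For readers unfamiliar with the timing model behind the abstract above, here is a minimal Python sketch (illustrative only, not the paper's algorithm) of the standard arrival/required-time computation from which per-net delay bounds of this kind are derived; the network, delays, and timing requirement are made up:

```python
# Static-timing sketch: per-net slack in a combinational network.
from collections import defaultdict

# The network as a DAG: edge (u, v, d) means a signal leaving u reaches v
# after gate/wire delay d.
edges = [("in1", "g1", 2.0), ("in2", "g1", 1.0),
         ("g1", "g2", 3.0), ("g1", "out", 1.0), ("g2", "out", 2.0)]

succ, pred = defaultdict(list), defaultdict(list)
for u, v, d in edges:
    succ[u].append((v, d))
    pred[v].append((u, d))

order = ["in1", "in2", "g1", "g2", "out"]   # a topological order

# Forward pass: latest arrival time at each node (inputs arrive at t = 0).
arrive = {n: 0.0 for n in order}
for v in order:
    for u, d in pred[v]:
        arrive[v] = max(arrive[v], arrive[u] + d)

# Backward pass: required time, given the requirement at the output terminal.
T_REQ = 8.0
require = {n: T_REQ for n in order}
for u in reversed(order):
    for v, d in succ[u]:
        require[u] = min(require[u], require[v] - d)

# Slack of each net: the extra interconnection delay the layout may add on
# that edge before the output requirement is violated.
for u, v, d in edges:
    print(f"{u} -> {v}: slack = {require[v] - arrive[u] - d:.1f}")
```

Note that handing every net its full slack double-counts slack shared along a path; the point of the bounds described in the paper is that they can all be met simultaneously, so a wirable placement satisfying them is guaranteed to meet the timing requirements.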