We present a methodology for the design and analysis of power grids in PowerPC™ microprocessors. The methodology covers the need for power grid analysis across all stages of the design process. A case study showing the application of this methodology to the PowerPC™ 750 microprocessor is presented.

Keywords: power distribution network, PowerPC™, IR-drop, reliability

Overview

A robust power distribution network is vital to meeting performance guarantees and ensuring reliable operation of high-performance microprocessors. Higher device densities and faster switching frequencies cause large switching currents to flow in the power and ground networks, which degrade performance and reliability. Excessive voltage drops in the power grid reduce switching speeds and noise margins of circuits, and inject noise that can lead to functional failures. High average current densities cause undesirable wear-out of metal wires due to electromigration [1]. The challenge in designing a power distribution network is therefore to achieve excellent voltage regulation at the consumption points, notwithstanding the wide fluctuations in power demand across the chip, while using minimum area of the metal layers. These issues are prominent in high-performance PowerPC™ processors, in which large amounts of power must be distributed through a hierarchy of five or six metal layers. For example, in the PowerPC™ 750 processor, the average power dissipation at a nominal Vdd of 2.5 V is 5 W.

The crux of the problem in designing a power grid is that many parameters remain unknown until the very end of the design cycle. Nevertheless, decisions about the structure, size, and layout of the power grid must be made at very early stages, when a large part of the chip design has not even begun. Unfortunately, most commercial tools focus on post-layout verification of the power grid, when the entire chip design is complete and detailed information about the parasitics of the power and ground lines and the currents drawn by the transistors is known. Power grid problems revealed at this stage are usually very difficult or expensive to fix. The methodology described in this paper is centered around an initial power grid that is refined progressively as the chip design progresses. Paramount to such a methodology is an analysis tool that offers a very wide choice of models for specifying the grid and the power consumption behavior of the chip. The models must span a range of accuracies that makes them suitable for use at any stage of the overall chip design. Such an approach is necessary to identify and remedy problems with the power grid early on, so that final verification results in only minor fixes.

A critical issue in the analysis of power grids is the large size of the network (typically millions of nodes in a state-of-the-art microprocessor). Simulating all the nonlinear devices in the chip together with the non-ideal power grid is computationally infeasible. To make the size manageable, the simulation is done in two steps: first, the nonlinear devices are simulated assuming an ideal power grid, and the time-varying currents they draw are recorded; the power grid is then simulated as a linear network excited by these currents.
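To make the second, linear step concrete, the following is a minimal sketch of DC IR-drop analysis on a resistive power mesh: the grid is assembled in nodal form and solved with a sparse direct solver, with the device currents from the first step applied as loads. The mesh size, segment conductance, pad locations, and load values are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the linear (second) step described above: DC IR-drop
# analysis of a resistive power mesh. All dimensions and values below are
# illustrative assumptions.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def ir_drop(n, g_seg, vdd, pad_nodes, loads):
    """Solve G*v = i for an n x n resistive mesh.

    g_seg     : conductance of each wire segment (S)
    vdd       : supply voltage forced at the pad nodes (V)
    pad_nodes : node indices tied to Vdd through ideal pads
    loads     : dict {node_index: current drawn in A}, taken from the
                prior nonlinear (device-level) simulation step
    """
    N = n * n
    G = sp.lil_matrix((N, N))
    for r in range(n):
        for c in range(n):
            k = r * n + c
            for dr, dc in ((0, 1), (1, 0)):        # right and down neighbors
                rr, cc = r + dr, c + dc
                if rr < n and cc < n:
                    j = rr * n + cc
                    G[k, k] += g_seg; G[j, j] += g_seg
                    G[k, j] -= g_seg; G[j, k] -= g_seg
    i = np.zeros(N)
    for node, amps in loads.items():
        i[node] -= amps                            # devices pull current out
    for p in pad_nodes:                            # Dirichlet rows: v = vdd
        G.rows[p] = [p]; G.data[p] = [1.0]
        i[p] = vdd
    v = spla.spsolve(G.tocsr(), i)
    return vdd - v                                 # IR drop at every node

# Example: 20x20 mesh, pads at the four corners, a hot spot near the middle.
drop = ir_drop(20, g_seg=10.0, vdd=2.5,
               pad_nodes=[0, 19, 380, 399],
               loads={210: 0.05, 211: 0.05})
print("worst-case IR drop: %.1f mV" % (1e3 * drop.max()))
```

The same assembly extends to transient analysis by adding decoupling-capacitor and package models; the sketch keeps only the resistive DC case to show the structure of the linear solve.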
Careful design and verification of the power distribution network of a chip are of critical importance in ensuring its reliable performance. With the increasing number of transistors on a chip, the size of the power network has grown so large as to make the verification task very challenging. The available computational power and memory resources impose limitations on the size of the networks that can be analyzed using currently known techniques, and many of today's designs have power networks that are too large to be analyzed in the traditional way, as flat networks. In this paper, we propose a hierarchical analysis technique to overcome this capacity limitation. We present a new technique for analyzing a power grid using macromodels created for a set of partitions of the grid. Efficient numerical techniques for the computation and sparsification of the port admittance matrices of the macromodels are presented. A novel sparsification technique using a 0-1 integer linear programming formulation is proposed to achieve superior sparsification for a specified error. The run-time and memory efficiency of the proposed method are illustrated through case studies of several multi-million-node power grids extracted from real microprocessor and DSP designs.

With the increase in the complexity of VLSI chips, designing and analyzing a power distribution network has become a challenging task. A robust power network design is essential to ensure that the circuits on a chip operate reliably at the guaranteed level of performance. A poorly designed power network can cause a variety of problems, such as loss of circuit performance, noise generation, and electromigration failures. With the increased power levels and device densities of microprocessors in sub-micron technologies, these problems are likely unless serious attention is given to power network design. Critical to obtaining a robust design is the ability to analyze the network efficiently several times during the design cycle. Several previously published works [1-4] discuss methodologies and techniques to accomplish this task efficiently.

The difficulty in power network analysis stems mainly from three sources: (i) the network is very large, typically 1 million to 100 million nodes; (ii) the network is nonlinear, as it contains switching devices; and (iii) the voltage and current distribution in the network depends on the instructions executed on the processor. Our work, presented in this paper, addresses the first problem. The second problem is traditionally circumvented [1] by performing nonlinear simulation of individual circuit blocks without including the parasitics of the power interconnect, and then simulating the power interconnect as a whole using the time-varying current profiles obtained in the nonlinear simulation as the excitation sources. The third problem is one of obtaining good coverage of all possible worst-case power demand situations; manually generated "hot loops" and extensive sets of input vectors are commonly used for this purpose.
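To illustrate the macromodeling step, the sketch below reduces a single resistive partition to a port-level relation between port currents and port voltages. One common form consistent with the port admittance matrices mentioned above is i_port = A·v_port + S, where A is the Schur complement of the partition's conductance matrix with respect to its internal nodes and S folds the internal current loads into equivalent port currents. The four-node example partition and its load are illustrative assumptions, and the paper's sparsification of A is not shown.

```python
# Minimal sketch of building a partition macromodel: eliminate the internal
# nodes of a resistive partition to get the port admittance matrix (a Schur
# complement). The example partition and loads are illustrative assumptions.
import numpy as np

def port_macromodel(G, ports, i_internal):
    """Reduce a partition's nodal conductance matrix to its ports.

    G          : (N, N) symmetric nodal conductance matrix
    ports      : indices of the boundary (port) nodes
    i_internal : current injections at the internal nodes (A)
    Returns (A, S) such that i_port = A @ v_port + S.
    """
    N = G.shape[0]
    internal = [k for k in range(N) if k not in set(ports)]
    Gpp = G[np.ix_(ports, ports)]
    Gpi = G[np.ix_(ports, internal)]
    Gip = G[np.ix_(internal, ports)]
    Gii = G[np.ix_(internal, internal)]
    Gii_inv = np.linalg.inv(Gii)        # fine for a tiny demo; a sparse
                                        # factorization is needed at scale
    A = Gpp - Gpi @ Gii_inv @ Gip       # port admittance (Schur complement)
    S = Gpi @ Gii_inv @ i_internal      # internal loads seen at the ports
    return A, S

# Demo: a 4-node chain (0-1-2-3, 1 S per segment), ports at the two ends,
# and a device drawing 10 mA at internal node 1.
G = np.array([[ 1.0, -1.0,  0.0,  0.0],
              [-1.0,  2.0, -1.0,  0.0],
              [ 0.0, -1.0,  2.0, -1.0],
              [ 0.0,  0.0, -1.0,  1.0]])
A, S = port_macromodel(G, ports=[0, 3], i_internal=np.array([-0.01, 0.0]))
print("A =\n", A)   # 1/3 S between the ports, as three 1-ohm... 1 S segments in series
print("S =", S)     # port currents that must feed the internal 10 mA load
```

Once each partition is reduced this way, only the (much smaller) global system over the port nodes needs to be solved, which is what makes the hierarchical approach tractable for multi-million-node grids.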
Process variation has become a significant concern for static timing analysis. In this paper, we present a new method for path-based statistical timing analysis. We first propose a method for modeling inter- and intra-die device length variations. Based on this model, we then present an efficient method for computing the total path delay probability distribution using a combination of device length enumeration for inter-die variation and an analytical approach for intra-die variation. We also propose a simple and effective model of spatial correlation of intra-die device length variation, and extend the analysis to include spatial correlation. We test the proposed methods on paths from an industrial high-performance microprocessor and present comparisons with traditional path analysis, which does not distinguish between inter- and intra-die variations. The characteristics of the device length distributions were obtained from measured data of 8 test chips with a total of 17,688 device length measurements; spatial correlation data was also obtained from these measurements. We demonstrate the accuracy of the proposed approach by comparing our results with Monte Carlo simulation.
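The following is a minimal sketch of the combined enumeration/analytical computation described above, assuming a first-order (linear) delay sensitivity to device length and Gaussian inter- and intra-die components; all stage delays, sensitivities, and sigmas are illustrative assumptions, and spatial correlation is omitted. Conditioned on an enumerated inter-die offset, the path delay is Gaussian, so the total distribution is a weighted mixture of Gaussians, validated here against Monte Carlo.

```python
# Minimal sketch: enumerate the shared inter-die device-length offset,
# treat independent intra-die variation analytically, and compare against
# Monte Carlo. All model parameters below are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

d0 = np.array([20.0, 35.0, 15.0, 30.0])   # nominal stage delays (ps)
s  = np.array([0.8, 1.2, 0.6, 1.0])       # delay sensitivities (ps per nm)
sigma_inter, sigma_intra = 3.0, 2.0       # device-length sigmas (nm)

def path_delay_cdf(t):
    """P(path delay <= t), enumerating inter-die dL on a quadrature grid."""
    dl, w = np.polynomial.hermite_e.hermegauss(32)   # probabilists' Hermite
    w = w / w.sum()                                  # normal-PDF weights
    mean = d0.sum() + s.sum() * sigma_inter * dl     # conditional path mean
    std = np.sqrt(np.sum((s * sigma_intra) ** 2))    # conditional path sigma
    return np.sum(w * stats.norm.cdf(t, loc=mean, scale=std))

# Monte Carlo reference: one shared inter-die draw per sample, plus
# independent intra-die draws per stage.
n = 200_000
dl_inter = rng.normal(0.0, sigma_inter, n)
delay = (d0.sum()
         + s.sum() * dl_inter
         + (s * rng.normal(0.0, sigma_intra, (n, 4))).sum(axis=1))
t99 = np.quantile(delay, 0.99)
print("MC P(delay <= t99) = 0.99, analytical =", round(path_delay_cdf(t99), 4))
```

The key point the sketch captures is that inter-die variation shifts every device on the path together (one enumerated variable), while intra-die variation averages out across stages, so the two components must be modeled separately rather than lumped into a single sigma.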