Careful design and verification of the power distribution network of a chip are of critical importance to ensure its reliable performance. With the increasing number of transistors on a chip, the size of the power network has grown so large as to make the verification task very challenging. The available computational power and memory resources impose limitations on the size of networks that can be analyzed using currently known techniques. Many of today's designs have power networks that are too large to be analyzed in the traditional way, as flat networks. In this paper, we propose a hierarchical analysis technique to overcome this capacity limitation. We present a new technique for analyzing a power grid using macromodels that are created for a set of partitions of the grid. Efficient numerical techniques for the computation and sparsification of the port admittance matrices of the macromodels are presented. A novel sparsification technique using a 0-1 integer linear programming formulation is proposed to achieve superior sparsification for a specified error. The run-time and memory efficiency of the proposed method are illustrated through case studies of several multi-million-node power grids extracted from real microprocessor and DSP designs.

With the increase in the complexity of VLSI chips, designing and analyzing a power distribution network has become a challenging task. A robust power network design is essential to ensure that the circuits on a chip operate reliably at the guaranteed level of performance. A poorly designed power network can cause a variety of problems, such as loss of circuit performance, noise generation, and electromigration failures. With the increased power levels and device densities of microprocessors in sub-micron technologies, these problems become more likely unless serious attention is given to power network design. Critical to obtaining a robust design is the ability to analyze the network efficiently several times in the design cycle. Several previously published works [1-4] discuss methodologies and techniques to accomplish this task efficiently.

The difficulty in power network analysis stems mainly from three sources: (i) the network is very large, typically 1 million to 100 million nodes; (ii) the network is nonlinear, as it contains switching devices; and (iii) the voltage and current distribution in the network depends on the instructions executed on the processor. Our work, presented in this paper, addresses the first problem. The second problem is traditionally circumvented [1] by performing nonlinear simulation of individual circuit blocks without including the parasitics in the power interconnects, and then simulating the power interconnect as a whole using the time-varying current profiles obtained in the nonlinear simulation as the excitation sources. The third problem is one of obtaining good coverage of all possible worst-case power demand situations. Manually generated "hot loops," an extensive set of input vectors...
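To make the macromodel idea concrete, the sketch below reduces one linear(ized) grid partition to its port behavior via a Schur complement: eliminating the internal nodes of the partition's conductance matrix yields a port admittance matrix A and an equivalent current source S such that i_port = A v_port + S. This is a minimal illustration assuming a purely resistive partition with DC load currents; the function name and data layout are ours, not the paper's, and the paper's sparsification of A is not shown.

```python
import numpy as np

def port_macromodel(G, I_load, port_idx):
    """Collapse a resistive grid partition onto its boundary (port) nodes.

    G        : (n, n) conductance matrix of the partition (grounded, SPD)
    I_load   : (n,) load current drawn at each node of the partition
    port_idx : indices of the boundary nodes shared with the global grid

    Returns (A, S) with i_port = A @ v_port + S, i.e. the Schur
    complement of the internal block of G plus the folded-in loads.
    """
    n = G.shape[0]
    p = np.asarray(port_idx)
    q = np.setdiff1d(np.arange(n), p)            # internal node indices

    G_pp, G_pi = G[np.ix_(p, p)], G[np.ix_(p, q)]
    G_ip, G_ii = G[np.ix_(q, p)], G[np.ix_(q, q)]

    # Eliminate the internal nodes: solve G_ii X = [G_ip | I_i] once.
    X = np.linalg.solve(G_ii, np.column_stack([G_ip, I_load[q]]))
    A = G_pp - G_pi @ X[:, :-1]                  # port admittance matrix
    S = I_load[p] - G_pi @ X[:, -1]              # equivalent port current
    return A, S
```

In a hierarchical flow of the kind the abstract describes, each partition would contribute such an (A, S) pair, the global grid would be solved with partitions replaced by their macromodels, and the resulting port voltages back-substituted to recover the internal node voltages.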
In this paper, we propose a new sensitivity-based statistical gate sizing method. Since circuit optimization affects the entire shape of the circuit delay distribution, it is difficult to capture the quality of a distribution with a single metric. Hence, we first introduce a new objective function that provides an effective measure of the quality of a delay distribution for both ASIC and high-performance designs. We then propose an efficient and exact sensitivity-based pruning algorithm built on a newly proposed theory of perturbation bounds. A heuristic approach to sensitivity computation, which relies on efficient computation of statistical slack, is then introduced. Finally, we show how the pruning and statistical-slack-based approaches can be combined to obtain nearly identical results to the brute-force approach, with an average run-time improvement of up to 89x. We also compare the optimization results against those of a deterministic optimizer and show an improvement of up to 16% in the 99th-percentile circuit delay and up to 31% in the standard deviation for the same circuit area.
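As a rough illustration of the sensitivity-driven loop such an optimizer accelerates, the toy sketch below repeatedly upsizes the gate with the best finite-difference improvement of an assumed mu + 3*sigma delay objective per unit of added size. It is only a Monte Carlo stand-in: the three-gate delay model, the objective, and all names are ours, and the paper's pruning, perturbation bounds, and statistical-slack heuristic are precisely the machinery that avoids this brute-force sensitivity evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)

def circuit_delay(sizes, samples):
    """Toy 3-gate chain: nominal gate delay shrinks with size, while the
    per-gate variability sigma_i stays fixed; `samples` holds N(0,1) draws."""
    nominal = np.array([10.0, 8.0, 12.0]) / sizes
    sigma = np.array([1.0, 0.8, 1.2])
    return (nominal + sigma * samples).sum(axis=1)

def objective(sizes, samples):
    d = circuit_delay(sizes, samples)
    return d.mean() + 3.0 * d.std()       # assumed mu + 3*sigma metric

sizes = np.ones(3)
samples = rng.standard_normal((10_000, 3))
for _ in range(5):
    base = objective(sizes, samples)
    sens = []
    for i in range(3):                    # finite-difference sensitivity
        trial = sizes.copy()
        trial[i] += 0.1
        sens.append((base - objective(trial, samples)) / 0.1)
    best = int(np.argmax(sens))
    if sens[best] <= 0:                   # no gate improves the objective
        break
    sizes[best] += 0.1                    # upsize the most sensitive gate
print(sizes, objective(sizes, samples))
```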
The growing impact of within-die process variation has created the need for statistical timing analysis, in which gate delays are modeled as random variables. Statistical timing analysis has traditionally suffered from run-time complexity that is exponential in circuit size, due to the arrival-time dependencies created by reconverging paths in the circuit. In this paper, we propose a new approach to statistical timing analysis that is based on statistical bounds of the circuit delay. Since these bounds can be computed in time linear in circuit size, they can be computed efficiently for large circuits. Since both a lower and an upper bound on the true statistical delay are available, the quality of the bounds can be determined. If the computed bounds are not sufficiently close to each other, we propose a heuristic that iteratively improves the bounds through selective enumeration of the sample space, at the cost of additional run time. We demonstrate that the proposed bounds have only a small error and that, by carefully selecting a small set of nodes for enumeration, this error can be further reduced.
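The sketch below illustrates, on a toy reconvergent topology, how two-sided statistical bounds of this kind can arise: treating reconverging path delays as if independent bounds the circuit delay from above, while the envelope of the single-path marginals bounds it from below. The construction and all names are ours, chosen to mirror the idea rather than reproduce the paper's exact bounding algorithm.

```python
import numpy as np
from scipy.stats import norm

# Toy reconvergent circuit: gate A fans out to B and C, which reconverge.
# Path delays X = A + B and Y = A + C share A, so they are positively
# correlated and the exact delay D = max(X, Y) needs their joint law.
mA, sA, mB, sB, mC, sC = 5.0, 1.0, 4.0, 0.8, 4.5, 0.9

t = np.linspace(5.0, 20.0, 301)
Fx = norm.cdf(t, mA + mB, np.hypot(sA, sB))   # marginal CDF of path X
Fy = norm.cdf(t, mA + mC, np.hypot(sA, sC))   # marginal CDF of path Y

# Two-sided bounds on the CDF of D = max(X, Y):
#  - independence product understates P(D <= t)  -> upper delay bound
#  - single-path envelope overstates P(D <= t)   -> lower delay bound
cdf_upper_delay = Fx * Fy
cdf_lower_delay = np.minimum(Fx, Fy)

# Monte Carlo "truth" for comparison.
rng = np.random.default_rng(1)
A = rng.normal(mA, sA, 200_000)
D = A + np.maximum(rng.normal(mB, sB, A.size), rng.normal(mC, sC, A.size))
true_cdf = np.searchsorted(np.sort(D), t) / D.size

assert np.all(cdf_upper_delay <= true_cdf + 5e-3)   # pessimistic bound
assert np.all(cdf_lower_delay >= true_cdf - 5e-3)   # optimistic bound
```

Both bound curves cost only per-path (linear-time) propagation, and their gap at any quantile tells you directly whether a refinement step, such as the selective enumeration proposed above, is worth the extra run time.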
Statistical static timing analysis (SSTA) has become a key method for analyzing the effect of process variation in aggressively scaled CMOS technologies. Much research has focused on modeling spatial correlation in SSTA. However, the vast majority of these works used artificially generated process data to test the proposed models. Hence, it is difficult to determine the actual effectiveness of these methods, the conditions under which they are necessary, and whether they yield a gain in accuracy that warrants their increased run time and complexity. In this paper, we study five different correlation models and their associated SSTA methods using 35,420 critical dimension (CD) measurements extracted from 23 reticles on 5 wafers in a 130nm CMOS process. Based on the measured CD data, we analyze correlation as a function of distance and generate five distinct correlation models, ranging from simple models that incorporate one or two variation components to more complex models that utilize principal component analysis and quad-trees. We then study the accuracy of the different models and compare their SSTA results with the result of running STA directly on the extracted data. We also examine the trade-off between model accuracy and run time, as well as the impact of die size on model accuracy. We show that, especially for small dies (< 6.6mm x 5.7mm), the simple models provide accuracy comparable to that of the more complex ones while incurring significantly less run time and implementation difficulty. The results of this study demonstrate that correlation models for SSTA must be carefully tested on actual process data and must be used judiciously.
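A first step in a study of this kind is estimating correlation directly from the measurements. The sketch below computes an empirical correlogram by binning randomly sampled site pairs by separation distance and averaging products of standardized CD deviations; the function and argument names are our assumptions, and models like the five described above would then be fit to the curve this returns.

```python
import numpy as np

def correlation_vs_distance(x, y, cd, bin_width=1.0,
                            n_pairs=2_000_000, seed=0):
    """Empirical spatial correlation of CD versus site separation.

    x, y : site coordinates (e.g. in mm); cd : per-site CD deviation,
    ideally averaged over dies so only within-die variation remains.
    Pairs are randomly subsampled so the cost stays bounded even for
    tens of thousands of sites.
    """
    rng = np.random.default_rng(seed)
    z = (cd - cd.mean()) / cd.std()          # standardize: E[z_i z_j] ~ rho
    n = len(z)
    i = rng.integers(0, n, n_pairs)
    j = rng.integers(0, n, n_pairs)
    keep = i != j                            # drop self-pairs
    i, j = i[keep], j[keep]

    dist = np.hypot(x[i] - x[j], y[i] - y[j])
    bins = (dist / bin_width).astype(int)    # distance bucket per pair
    nb = bins.max() + 1
    sums = np.bincount(bins, weights=z[i] * z[j], minlength=nb)
    counts = np.bincount(bins, minlength=nb)

    centers = (np.arange(nb) + 0.5) * bin_width
    return centers, sums / np.maximum(counts, 1)
```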