Abstract-While process variations are becoming more significant with each new IC technology generation, they are often modeled via linear regression models so that the resulting performance variations can be captured via normal distributions. Nonlinear response surface models (e.g., quadratic polynomials) can be utilized to capture larger-scale process variations; however, such models result in non-normal distributions for circuit performance. These performance distributions are difficult to capture efficiently since the distribution model is unknown. In this paper, an asymptotic-probability-extraction (APEX) method for estimating the unknown random distribution when using nonlinear response surface modeling is proposed. APEX begins by efficiently computing the high-order moments of the unknown distribution and then applies moment matching to approximate the characteristic function of the random distribution by an efficient rational function. It is proven that such a moment-matching approach is asymptotically convergent when applied to quadratic response surface models. In addition, a number of novel algorithms and methods, including binomial moment evaluation, PDF/CDF shifting, nonlinear companding, and reverse evaluation, are proposed to improve the computational efficiency and/or approximation accuracy. Several circuit examples from both digital and analog applications demonstrate that APEX can provide better accuracy than a Monte Carlo simulation with 10^4 samples while achieving up to 10× higher efficiency. The error incurred by the popular normal modeling assumption is also quantified for several circuit examples designed in standard IC technologies.
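As a point of reference for the non-normality that APEX targets, the following sketch estimates high-order moments of a hypothetical quadratic response surface model by sampling; it is not the APEX algorithm itself (which evaluates moments analytically via binomial moment evaluation), and all model coefficients below are assumed for illustration.

```python
import numpy as np

# A minimal sketch (not APEX): moments of a hypothetical quadratic response
# surface model f(x) = c + b^T x + x^T A x with process variations x ~ N(0, I).
rng = np.random.default_rng(0)
n = 4                                    # number of process parameters (assumed)
A = 0.3 * rng.standard_normal((n, n)); A = 0.5 * (A + A.T)   # quadratic term
b = rng.standard_normal(n)               # linear term
c = 1.0                                  # nominal performance value

x = rng.standard_normal((200_000, n))    # Monte Carlo samples of the variations
f = c + x @ b + np.einsum('ij,jk,ik->i', x, A, x)

# High-order standardized moments of the resulting (non-normal) distribution.
mu, sigma = f.mean(), f.std()
moments = [np.mean(((f - mu) / sigma) ** k) for k in range(1, 7)]
print("standardized moments m1..m6:", np.round(moments, 3))
# A normal distribution would give (0, 1, 0, 3, 0, 15); the gap in skewness and
# kurtosis quantifies the error of a linear (normal) performance model.
```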
Abstract-The large-scale process and environmental variations in today's nanoscale ICs require statistical approaches to timing analysis and optimization. In this paper, we demonstrate why the traditional concepts of slack and critical path become ineffective under large-scale variations and propose a novel sensitivity framework to assess the "criticality" of every path, arc, and node in a statistical timing graph. We theoretically prove that the path sensitivity is exactly equal to the probability that a path is critical, and that the arc (or node) sensitivity is exactly equal to the probability that an arc (or a node) sits on the critical path. An efficient algorithm with incremental analysis capability is developed for fast sensitivity computation, with runtime complexity linear in circuit size. The efficacy of the proposed sensitivity analysis is demonstrated on both standard benchmark circuits and large industrial examples.
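The path-criticality probabilities that this sensitivity framework computes analytically can be illustrated with a brute-force Monte Carlo sketch on a hypothetical two-path timing graph; the delay means and sigmas below are assumed, and this is not the paper's linear-time algorithm.

```python
import numpy as np

# A minimal sketch: Monte Carlo estimate of the probability that each path of a
# tiny, hypothetical timing graph is critical under random arc delays.
rng = np.random.default_rng(1)
n_samples = 100_000

# Two paths sharing one arc; arc delays are Gaussian (means/sigmas assumed).
d_shared = rng.normal(5.0, 0.5, n_samples)   # arc common to both paths
d_a      = rng.normal(4.0, 1.0, n_samples)   # arc unique to path A
d_b      = rng.normal(4.2, 0.3, n_samples)   # arc unique to path B

path_a = d_shared + d_a
path_b = d_shared + d_b
crit_a = np.mean(path_a >= path_b)           # P(path A is critical)

print(f"P(path A critical) = {crit_a:.3f}, P(path B critical) = {1 - crit_a:.3f}")
# Under large variations both probabilities can be far from 0 or 1, which is
# why a single deterministic "critical path" loses meaning.
```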
As IC technologies scale to finer feature sizes, it becomes increasingly difficult to control the relative process variations. The growing fluctuations in manufacturing processes introduce unavoidable and significant uncertainty in circuit performance; hence, ensuring manufacturability has been identified as one of the top priorities of today's IC design problems. In this paper, we review various statistical methodologies that have recently been developed to model, analyze, and optimize performance variations at both the transistor level and the system level. The following topics are discussed in detail: sources of process variations, variation characterization and modeling, Monte Carlo analysis, response surface modeling, statistical timing and leakage analysis, probability distribution extraction, parametric yield estimation, and robust IC optimization. These techniques provide the necessary CAD infrastructure to facilitate the bold move from deterministic, corner-based IC design toward statistical and probabilistic design.
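As a concrete illustration of one listed topic, parametric yield estimation, the sketch below estimates yield by Monte Carlo sampling over a hypothetical linear delay model; the sensitivities and the 1.2 ns specification are assumed, not taken from any surveyed work.

```python
import numpy as np

# A minimal sketch of Monte Carlo parametric yield estimation.
rng = np.random.default_rng(2)
n_params, n_samples = 6, 50_000

sens = rng.uniform(0.01, 0.05, n_params)         # delay sensitivities (assumed)
dp = rng.standard_normal((n_samples, n_params))  # normalized process variations
delay = 1.0 + dp @ sens                          # nominal 1.0 ns plus variation

yield_est = np.mean(delay <= 1.2)                # fraction of dies meeting the spec
print(f"estimated parametric yield: {yield_est:.3%}")
```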
In this paper, we propose a novel projection-based algorithm to estimate the full-chip leakage power with consideration of both inter-die and intra-die process variations. Unlike many traditional approaches that rely on log-Normal approximations, the proposed algorithm applies a novel projection method to extract a low-rank quadratic model of the logarithm of the full-chip leakage current and, therefore, is not limited to log-Normal distributions. By exploring the underlying sparse structure of the problem, an efficient algorithm is developed to extract the non-log-Normal leakage distribution with linear computational complexity in circuit size. In addition, an incremental analysis algorithm is proposed to quickly update the leakage distribution after changes to a circuit are made. Our numerical examples in a commercial 90nm CMOS process demonstrate that the proposed algorithm provides 4× error reduction compared with the previously proposed log-Normal approximations, while achieving orders of magnitude more efficiency than a Monte Carlo analysis with 10^4 samples.

INTRODUCTION

As IC technologies move to nanoscale feature sizes, leakage power becomes increasingly large and significantly impacts the total chip power consumption. The predicted leakage power is expected to reach 50% of the total chip power within the next few technology generations [1]. Therefore, accurately modeling and analyzing leakage power has been identified as one of the top priorities for today's IC design problems.

The most important leakage components in nanoscale CMOS technologies include sub-threshold leakage and gate tunneling leakage [2]. The sub-threshold leakage models the weak inversion conduction when the gate voltage is below the threshold voltage. At the same time, the reduction of gate oxide thickness facilitates tunneling of electrons through the gate oxide, creating the gate leakage. Both of these leakage components are significant for sub-100nm technologies and must be considered for leakage analysis.

Unlike many other performances (e.g., delay), leakage power varies substantially due to process variations, which increases the difficulty of leakage estimation. As demonstrated in [3], leakage variations can reach 20×, while delays vary by only about 30%. It has also been observed that leakage power is sensitive to both inter-die and intra-die variations. Intra-die variations model the individual, but spatially correlated, local variations within the same die. These intra-die variations must be modeled by many additional random variables, thereby significantly increasing the problem size of leakage analysis. For example, the total number of random variables can reach 10^3 to 10^6 to model the full-chip variations for a practical industrial design.

Many works have been developed to capture the leakage variations [4]-[10]. Most of these approaches approximate the leakage variation as a log-Normal distribution. For that purpose, a first-order (i.e., linear) Taylor expansion is used to approximate the logarithm…
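A minimal sketch of the modeling issue described above: each gate's leakage, modeled as the exponential of a linear (first-order Taylor) function of Gaussian variations, is log-Normal, but the full-chip sum of such terms is not. The gate counts and coefficients below are assumed, and this is not the proposed projection algorithm.

```python
import numpy as np

# A minimal sketch: the sum of many log-Normal per-gate leakages deviates from
# a single log-Normal, which is what motivates a non-log-Normal extraction.
rng = np.random.default_rng(3)
n_gates, n_samples = 500, 20_000

a = rng.uniform(0.2, 0.6, n_gates)                    # per-gate sensitivity of log-leakage (assumed)
x_global = rng.standard_normal((n_samples, 1))        # shared inter-die variation
x_local = rng.standard_normal((n_samples, n_gates))   # spatially independent intra-die variations

log_leak = -2.0 + a * (0.7 * x_global + 0.3 * x_local)  # first-order model of log-leakage per gate
chip_leak = np.exp(log_leak).sum(axis=1)                # full-chip leakage: sum of log-Normals

# If the full-chip leakage were exactly log-Normal, log(chip_leak) would be
# Gaussian and its skewness would be zero.
log_chip = np.log(chip_leak)
skew = np.mean(((log_chip - log_chip.mean()) / log_chip.std()) ** 3)
print(f"skewness of log(full-chip leakage): {skew:.3f} (0 would mean exactly log-Normal)")
```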