Soft error reliability has become a first-order design criterion for modern microprocessors. Architectural Vulnerability Factor (AVF) modeling is often used to capture the probability that a radiation-induced fault in a hardware structure will manifest as an error at the program output. AVF estimation requires detailed microarchitectural simulations, which are time-consuming and typically report only aggregate metrics. Moreover, it requires a large number of simulations to derive insight into the impact of microarchitectural events on AVF. In this work we present a first-order mechanistic analytical model for computing AVF by estimating the occupancy of correct-path state in important microarchitecture structures through inexpensive profiling. We show that the model estimates the AVF for the reorder buffer, issue queue, load and store queue, and functional units in a 4-wide issue machine with a mean absolute error of less than 0.07. The model is constructed from the first principles of out-of-order processor execution in order to provide novel insight into the interaction of the workload with the microarchitecture in determining AVF. We demonstrate that the model can be used to perform design space explorations to understand trade-offs between soft error rate and performance, to study the impact of scaling microarchitectural structures on AVF and performance, and to characterize workloads for AVF.
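The occupancy-based estimate described above can be illustrated with a minimal sketch. The core first-order relation (common to occupancy-style AVF models) is that a structure's AVF is approximately the fraction of its capacity occupied by correct-path (architecturally required) state, averaged over time. The function name and example numbers below are illustrative assumptions, not values from the paper.

```python
def avf_estimate(avg_correct_path_entries: float, total_entries: int) -> float:
    """First-order AVF: fraction of the structure holding correct-path state.

    avg_correct_path_entries -- time-averaged number of occupied,
    correct-path entries (e.g., obtained from inexpensive profiling).
    total_entries -- total capacity of the structure.
    """
    return avg_correct_path_entries / total_entries

# Hypothetical example: a 128-entry reorder buffer that holds, on average,
# 48 correct-path entries over the profiled execution.
rob_avf = avf_estimate(48, 128)
print(round(rob_avf, 3))  # 0.375
```

Under this model, wrong-path and otherwise dead state lowers AVF because a fault striking it cannot reach the program output.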
In modern processors, both the hardware implementation and optimizing compilers are very complex, and they often interact in unpredictable ways. A high-performance microarchitecture typically issues instructions out-of-order and must deal with a number of disruptive miss events such as branch mispredictions and cache misses. An optimizing compiler implements a large number of individual optimizations which not only interact with the microarchitecture, but also interact with each other. These interactions can be constructive, destructive, or neutral. Furthermore, whether there is performance gain or loss often depends on the particular program being optimized and executed.

In practice, the only way that the performance gain (or loss) for a given compiler optimization can be determined is by running optimized programs on the hardware and timing them. This method, while useful, does not provide insight regarding the underlying causes for performance gain/loss. By using the recently proposed method of interval analysis [1, 2], one can decompose total execution time into intuitively meaningful cycle components. These components include a base cycle count, which is a measure of the time required to execute the program in the absence of all disruptive miss events, along with additional cycle counts for each type of miss event. Performance gain (or loss) resulting from a compiler optimization can then be attributed to either the base cycle count or to specific miss event(s).

By analyzing the various cycle count components for a wide range of compiler optimizations one can gain insight into the underlying mechanisms by which compiler optimizations affect out-of-order processor performance. The work reported here provides and supports a number of key insights. Some of these insights provide quantitative support for conventional wisdom, while others provide a fresh view of how compiler optimizations interact with superscalar processor performance.
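The cycle decomposition described above can be sketched concretely. In interval analysis, total execution time is the base cycle count plus one additive penalty component per miss-event type; the event names and cycle counts below are illustrative assumptions, not measurements from the paper.

```python
def total_cycles(base_cycles: int, miss_components: dict[str, int]) -> int:
    """Interval-analysis view: total time = base component (no miss events)
    plus an additive cycle count for each type of disruptive miss event."""
    return base_cycles + sum(miss_components.values())

# Hypothetical breakdown for one program run.
components = {
    "branch_mispredictions": 1_200_000,
    "icache_misses": 300_000,
    "long_latency_dcache_misses": 2_500_000,
}
print(total_cycles(10_000_000, components))  # 14000000
```

A compiler optimization's effect can then be attributed per component: comparing the breakdowns of the optimized and unoptimized runs shows whether the gain came from the base cycle count or from fewer (or cheaper) miss events.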
To be more specific:

Interval Analysis. We demonstrate the use of interval analysis for studying the impact of compiler optimizations on superscalar processor performance; this is done by breaking up the total execution time into cycle components and by analyzing the effect of compiler optimizations on the various cycle components. Compiler builders can use this methodology to better understand the impact of compiler optimizations.

Evaluating Compiler Optimizations. Our analysis provides a number of interesting insights with respect to how compiler optimizations affect out-of-order processor performance. For one, the critical path leading to mispredicted branches is the only place during program execution where optimizations reducing the length of the chain of dependent operations affect overall performance on a balanced out-of-order processor: inter-operation dependencies not residing on the critical path leading to a mis...

* Stijn Eyerman and Lieven Eeckhout are Research and Postdoctoral Fellows, respectively, with the Fund for Scientific Research-Flanders (Belgium) (FWO-Vlaanderen).