This paper describes a functional verification methodology based on a system developed at the IBM Microelectronics Embedded PowerPC Design Center to improve the coverage and convergence of random test generators in general and model-based random test generators in particular. It outlines specific tasks and methods devised for qualifying the test generators at various stages of the functional verification process to ensure the integrity of generated tests. It describes methods for calibrating the test generation process to improve functional coverage. In addition, it outlines a strategy for improved management and control of test generation for faster convergence across corner cases, complex scenarios, and deep interdependencies. The described methodology and its associated verification platform are deployed at the IBM Embedded PowerPC Design Center in Research Triangle Park, North Carolina, and have been used in the verification of the 4XX and 4XXFPU families of PowerPC processors.
Introduction

Test generators have become an important part of functional verification. With ever-increasing design complexity, shrinking design cycles, and cost-constrained projects placing a growing burden on verification engineers, processor design teams have become increasingly dependent on automatic test generators. A model-based system operates on a model of the structure and behavior of a device, or of the function that a system is designed to simulate [2]. Observed behavior (what the device is actually doing) is compared with predicted behavior (what the device should do). Differences between observed and predicted behavior are identified as discrepancies, indicating potential defects. The inference component of such a model-based system (e.g., its model-based reasoning engine) is then invoked to diagnose the nature and location of any defects.

A model-based system usually comprises several independent components (i.e., models, methods, and inference) [3]. Any generated result is based on and influenced by all relevant models and methods. Changes in the inference will affect the quality of the output for the same set of models, and changes in any of the models will affect the result of such a system even if the inference and generation methods remain the same. Ensuring the integrity and quality of the generated solutions is an important ongoing activity in the development and maintenance of such systems. Providing feedback and guidance to users on proper utilization and on the adjustments required to existing methods and procedures (i.e., what adjustments users must make to the models or methods they have already developed) is another important ongoing activity in such an environment [4].
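The prediction-versus-observation loop described above can be made concrete with a small sketch. The following Python is illustrative only; the class and function names are hypothetical and are not taken from the paper or the IBM system. It shows a reference model predicting behavior step by step, a comparison against an observed trace, and the resulting discrepancy records that an inference component would then diagnose.

from dataclasses import dataclass

@dataclass
class Discrepancy:
    """A mismatch between predicted and observed behavior at one step."""
    step: int
    predicted: int
    observed: int

class ReferenceModel:
    """Toy behavioral model: a 32-bit accumulator stands in for the
    predicted architectural state of the device under test."""
    def __init__(self):
        self.acc = 0

    def step(self, operand: int) -> int:
        self.acc = (self.acc + operand) & 0xFFFFFFFF
        return self.acc

def check(model: ReferenceModel, stimulus, observed_trace):
    """Compare predicted behavior against observed behavior step by step;
    the returned discrepancies are the input to a diagnosis/inference stage."""
    discrepancies = []
    for i, (operand, observed) in enumerate(zip(stimulus, observed_trace)):
        predicted = model.step(operand)
        if predicted != observed:
            discrepancies.append(Discrepancy(i, predicted, observed))
    return discrepancies

if __name__ == "__main__":
    stimulus = [1, 2, 3]
    observed = [1, 3, 7]  # last value deliberately wrong to show a discrepancy
    for d in check(ReferenceModel(), stimulus, observed):
        print(f"step {d.step}: predicted {d.predicted}, observed {d.observed}")

In a real model-based generator the reference model, the comparison method, and the diagnosis engine are the independent components noted above, which is why a change to any one of them can alter the results even when the others are unchanged.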