Test data collection for a failing integrated circuit (IC) can be very expensive and time consuming. Many companies now collect a fixed amount of test data regardless of the failure characteristics. Consequently, too little data collection can lead to inaccurate diagnosis, while an excessive amount increases cost, not only for unnecessary test-data collection but also for test execution and data storage. In this work, the objective is to develop a method for predicting the precise amount of test data necessary to produce an accurate diagnosis. By analyzing the failing outputs of an IC during its actual test, the developed method dynamically determines at which failing test pattern to terminate testing, producing an amount of test data that is sufficient for accurate diagnosis. The method leverages several statistical learning techniques, and is evaluated using actual data from a population of failing chips and five standard benchmarks. Experiments demonstrate that test-data collection can be reduced by more than 30% (as compared to collecting the full failure response) while at the same time ensuring more than 90% diagnosis accuracy. Prematurely terminating test-data collection at fixed levels (e.g., 100 failing bits) is also shown to negatively impact diagnosis accuracy.
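The dynamic-termination idea above can be sketched as a stop rule evaluated after each failing pattern. The sketch below is illustrative only: the confidence model here is a stand-in heuristic (confidence growing with the number of distinct failing responses observed), not the statistical learning models the abstract refers to, and the function names are hypothetical.

```python
# Hedged sketch of a dynamic test-termination rule: after each failing
# pattern, a predictor estimates whether the data collected so far is
# sufficient for accurate diagnosis; collection stops once the estimated
# confidence exceeds a threshold.

def sufficient_confidence(failing_responses):
    """Stand-in for a trained statistical model: here, confidence simply
    grows with the number of distinct failing outputs seen (illustrative)."""
    distinct = len({tuple(r) for r in failing_responses})
    return min(1.0, distinct / 10.0)

def collect_until_sufficient(response_stream, threshold=0.9):
    """Collect failing responses one pattern at a time, terminating early
    once the confidence estimate passes the threshold."""
    collected = []
    for response in response_stream:
        collected.append(response)
        if sufficient_confidence(collected) >= threshold:
            break
    return collected
```

In practice the stop rule would be driven by a model trained on historical diagnosis outcomes rather than a fixed distinct-response count, but the control flow (per-pattern decision, early exit) is the same.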
We propose to achieve and maintain ultra-high quality of digital circuits on a per-design basis by (i) monitoring the types of failures that occur through volume diagnosis, and (ii) changing the test patterns to match the current failure population characteristics. In contrast to the current approach, which assumes that sufficient quality levels are maintained using the tests developed at design time, the methodology described here presupposes that fallout characteristics can change over time, but with a time constant that is slow enough to allow test content to be altered to maximize coverage of the failure types actually occurring. Even if this assumption proves false and the fallout characteristics are unchanging, the test content can still be tuned to match the characteristics of the fallout population. Under either scenario, it should then be possible to minimize DPPM for a given constraint on test cost, or alternatively to ensure that DPPM does not exceed some pre-determined threshold. Our approach does not have to cope with situations where fallout characteristics change rapidly (e.g., excursions), since existing methods already address them. Our methodology uses a diagnosis technique that can extract defect activation conditions, a new model for estimating DPPM, and an efficient test selection method for reducing DPPM based on volume diagnosis results. Circuit-level simulation involving various types of defects shows that DPPM can be reduced by 30% using our methodology. In addition, experiments on real silicon-chip failures show that DPPM can be significantly reduced, without additional test execution cost, by altering the content (but not the size) of the applied test set.
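The test-selection step described above can be illustrated with a simple greedy sketch: given per-defect-type DPPM estimates from volume diagnosis, repeatedly pick the test that covers the most remaining DPPM weight within a cost budget. This is a generic weighted-coverage heuristic under assumed data structures, not the paper's actual selection algorithm; all names and inputs are hypothetical.

```python
# Hedged sketch: greedy test selection to reduce DPPM under a test-cost budget.
# candidate_tests maps a test name to (cost, set of defect types it covers);
# defect_weights maps a defect type to its estimated DPPM contribution,
# as would be inferred from volume diagnosis results.

def select_tests(candidate_tests, defect_weights, budget):
    """Repeatedly select the affordable test whose still-uncovered defect
    types carry the largest total DPPM weight, until no test improves
    coverage within the remaining budget."""
    selected, covered, spent = [], set(), 0
    while True:
        best_name, best_gain = None, 0.0
        for name, (cost, types) in candidate_tests.items():
            if name in selected or spent + cost > budget:
                continue
            gain = sum(defect_weights.get(t, 0.0) for t in types - covered)
            if gain > best_gain:
                best_name, best_gain = name, gain
        if best_name is None:
            break
        cost, types = candidate_tests[best_name]
        selected.append(best_name)
        covered |= types
        spent += cost
    return selected
```

A greedy heuristic like this is a common baseline for such budgeted-coverage problems; the abstract's method additionally exploits extracted defect activation conditions and a dedicated DPPM model.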