Modern digital systems are growing in size and complexity, introducing significant organizational and verification challenges in the design cycle. Verification today takes as much as 70% of the design time, with debugging responsible for half of this effort. Automation has mitigated part of the resource-intensive nature of rectifying erroneous designs. Nevertheless, most tools target failures in isolation. Since regression verification can expose a myriad of failures in a single run, automation is also required to help an engineer rank them and expedite debugging. To address this growing regression pain, this paper presents a framework that utilizes traditional machine learning techniques along with historical data from version control systems and the results of functional debugging. Its aim is to rank revisions based on their likelihood of being responsible for a particular failure. Ranking prioritizes the revisions that ought to be targeted first, and therefore it speeds up localization of the error source, effectively reducing the number of debug iterations. Experiments on industrial designs demonstrate a 68% improvement in the ranking of actual erroneous revisions versus the ranking obtained through existing industrial methodologies. This benefit comes with negligible run-time overhead.
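To make the ranking idea concrete, the following is a minimal sketch, not the authors' implementation: it trains a conventional classifier (scikit-learn's random forest) on hypothetical per-revision features that one might mine from a version control system and from functional-debugging results, then orders the revisions touched by a new failure by their predicted probability of being the error source. All feature names, revision identifiers, and numeric values below are illustrative assumptions.

```python
# Illustrative sketch only: rank candidate revisions by suspicion score.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-revision features:
# [lines changed, files overlapping debug-reported suspect locations, days since commit]
X_train = np.array([
    [120, 3, 1],
    [10, 0, 30],
    [45, 1, 5],
    [300, 5, 2],
    [8, 0, 60],
])
# Labels from previously resolved failures: 1 = revision was the error source.
y_train = np.array([1, 0, 0, 1, 0])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Revisions implicated by the current regression failure (same feature layout).
candidates = {
    "r1021": [200, 4, 1],
    "r1019": [15, 0, 12],
    "r1017": [60, 2, 3],
}
scores = clf.predict_proba(np.array(list(candidates.values())))[:, 1]

# Present revisions in descending order of suspicion, so the engineer
# inspects the most likely culprit first.
for rev, score in sorted(zip(candidates, scores), key=lambda p: -p[1]):
    print(f"{rev}: {score:.2f}")
```

In practice such a ranker would be trained on a project's own history of resolved failures; the point of the sketch is only to show how a per-revision suspicion score directly yields the prioritized debug order described above.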