Mutation testing measures the adequacy of a test suite by seeding artificial defects (mutations) into a program. If a mutation is not detected by the test suite, this usually means that the test suite is inadequate. It may also be, however, that the mutant leaves the program's semantics unchanged and thus cannot be detected by any test. Such equivalent mutants have to be eliminated manually, which is tedious. We assess the impact of mutations by checking dynamic invariants. In an evaluation of our JAVALANCHE framework on seven industrial-size programs, we found that mutations that violate invariants are significantly more likely to be detectable by a test suite. As a consequence, efforts to improve a test suite should focus on mutations that impact invariants. With less than 3% equivalent mutants, our approach provides an efficient, precise, and fully automatic measure of the adequacy of a test suite.
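As a rough illustration of the idea (a minimal sketch, not JAVALANCHE's actual implementation), the Java snippet below seeds one mutation by hand and checks a dynamic invariant at runtime; the Account class, the seeded operator change, and the invariant are all hypothetical.

```java
// Sketch of invariant-based mutation impact. The class, the mutation,
// and the invariant are illustrative assumptions, not JAVALANCHE code.
public class Account {
    private int balance = 0;

    public void deposit(int amount) {
        // Original statement: balance += amount;
        // Seeded mutation: replace += with -=
        balance -= amount;
        // Dynamic invariant learned from passing runs: balance >= 0.
        // The mutant violates it, i.e. it has impact on an invariant,
        // so it is more likely to be detectable by some test.
        assert balance >= 0 : "invariant violated: balance < 0";
    }

    public static void main(String[] args) {
        Account a = new Account();
        a.deposit(10); // run with `java -ea Account` to enable assertions
    }
}
```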
Researchers have proposed a number of tools for automatic bug localization. Given a program and a description of the failure, such tools pinpoint a set of statements that are most likely to contain the bug. Evaluating bug localization tools is difficult because existing benchmarks are limited in the size of their subjects and the number of bugs. In this paper, we present iBUGS, an approach that semi-automatically extracts benchmarks for bug localization from the history of a project. For ASPECTJ, we extracted 369 bugs, 223 of which had associated test cases. We demonstrate the relevance of our dataset with a case study on the bug localization tool AMPLE.
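A core step in mining bugs from project history is linking commits to bug reports. The toy sketch below scans commit messages for bug identifiers; the regex and the sample messages are illustrative assumptions, and iBUGS's actual heuristics are more involved than this.

```java
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Toy sketch: link commits to bug reports by matching bug identifiers
// in commit messages. Regex and messages are assumptions for illustration.
public class BugLinker {
    private static final Pattern BUG_ID = Pattern.compile("(?i)\\bbug\\s*#?(\\d+)");

    public static void main(String[] args) {
        List<String> commitMessages = List.of(
            "Fix for Bug 123456: NPE in weaver",
            "Refactor pointcut matching",
            "bug #98765 resolved, added regression test");
        for (String msg : commitMessages) {
            Matcher m = BUG_ID.matcher(msg);
            if (m.find()) {
                System.out.println("commit -> bug " + m.group(1) + ": " + msg);
            }
        }
    }
}
```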
A common method to localize defects is to compare the coverage of passing and failing program runs: A method executed only in failing runs, for instance, is likely to point to the defect. Some failures, though, come to be only through a specific sequence of method calls, such as multiple deallocation of the same resource. Such sequences can be collected from arbitrary Java programs at low cost; comparing object-specific sequences predicts defects better than simply comparing coverage. In a controlled experiment, our technique pinpointed the defective class in 39% of all test runs.
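A minimal sketch of the underlying idea, assuming hand-written tracing rather than the authors' low-cost instrumentation: each traced object keeps a sliding window of its last k method calls, every full window is recorded as a sequence, and sequences occurring only in failing runs point to the defective class. The window length and the trace calls below are illustrative assumptions.

```java
import java.util.*;

// Sketch of object-specific call-sequence collection: per receiver,
// keep a sliding window of the last K method names and record each
// full window. Comparing the sequence sets of passing and failing
// runs then ranks classes; tracing is hand-written here for brevity.
public class SequenceTracer {
    private static final int K = 2; // window length (an assumption)
    private static final Map<Object, Deque<String>> WINDOWS = new IdentityHashMap<>();
    private static final Set<List<String>> SEQUENCES = new HashSet<>();

    static void trace(Object receiver, String method) {
        Deque<String> w = WINDOWS.computeIfAbsent(receiver, o -> new ArrayDeque<>());
        w.addLast(method);
        if (w.size() > K) w.removeFirst();
        if (w.size() == K) SEQUENCES.add(new ArrayList<>(w));
    }

    public static void main(String[] args) {
        Object resource = new Object();
        trace(resource, "open");
        trace(resource, "close");
        trace(resource, "close"); // double deallocation: yields [close, close]
        // A sequence like [close, close] seen only in failing runs
        // incriminates the class of `resource`.
        System.out.println(SEQUENCES);
    }
}
```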