Software has bugs, and fixing those bugs pervades the software engineering process. It is folklore that bug fixes are often buggy themselves, resulting in bad fixes that either fail to fix the original bug or introduce new ones. To confirm this folklore, we explored the bug databases of the Ant, AspectJ, and Rhino projects and found that bad fixes account for as much as 9% of all bugs. Detecting and correcting bad fixes is therefore important for improving software quality and reliability. However, no prior work has systematically considered this bad-fix problem, which this paper introduces and formalizes. In particular, the paper formalizes two criteria for determining whether a fix resolves a bug: coverage and disruption. The coverage of a fix measures the extent to which the fix correctly handles all inputs that may trigger the bug, while disruption measures the deviations from the program's intended behavior after the fix is applied. The paper also introduces a novel notion of distance-bounded weakest precondition as the basis for practical techniques to compute the coverage and disruption of a fix. To validate our approach, we implemented FIXATION, a prototype that automatically detects bad fixes for Java programs. When it detects a bad fix, FIXATION returns an input that still triggers the bug or reports a newly introduced bug; programmers can then use that bug-triggering input to refine or reformulate their fix. We manually extracted fixes from real-world projects and evaluated FIXATION against them: FIXATION successfully detected the extracted bad fixes.
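The coverage and disruption criteria can be illustrated with a toy example. The sketch below is not from the paper: the `AbsDemo` class and its methods are hypothetical. It shows one fix that fails to cover all bug-triggering inputs and another that disrupts the intended behavior on inputs that were previously handled correctly.

```java
// Hypothetical toy example illustrating the two bad-fix criteria.
public class AbsDemo {
    // Buggy: intended to return |x|, but mishandles every negative input.
    static int absBuggy(int x) {
        return x; // bug: wrong for all x < 0
    }

    // Bad fix with incomplete COVERAGE: repairs only x == -1,
    // although every negative input triggers the bug.
    static int absLowCoverage(int x) {
        if (x == -1) return 1;
        return x; // still wrong for x < -1
    }

    // Bad fix with DISRUPTION: repairs negatives but deviates from
    // the intended behavior on inputs that never triggered the bug.
    static int absDisruptive(int x) {
        if (x < 0) return -x;
        return x + 1; // new bug: wrong for all x >= 0
    }

    public static void main(String[] args) {
        System.out.println(absLowCoverage(-5)); // prints -5: bug survives
        System.out.println(absDisruptive(3));   // prints 4: new bug introduced
    }
}
```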
Change impact analysis is a useful technique for software evolution. It determines the effects of a source-editing session and provides valuable feedback to programmers for making correct decisions. Many techniques have been proposed to support change impact analysis of procedural or object-oriented software, but little effort has been devoted to aspect-oriented software. In this paper, we propose a new change impact analysis technique for AspectJ programs. At the core of our approach is an atomic-change representation that precisely captures the semantic differences between two versions of an AspectJ program. We also present a change impact model, based on static AspectJ call graph construction, to determine the impacted program parts, the affected tests, and the responsible affecting changes. As an application of change impact analysis, we discuss how our model can help programmers locate the exact cause of a failure by narrowing down the affecting changes when debugging AspectJ programs. The proposed techniques have been implemented in Celadon, a change impact analysis framework for AspectJ programs. We experimentally evaluated the proposed techniques on 24 versions of 8 AspectJ benchmarks. The results show that our technique effectively performs change impact analysis and provides valuable debugging information for AspectJ programs.
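As a rough illustration of the idea only (this is not Celadon's actual representation or API; the `AtomicChange` type, the call-graph map, and all element names below are hypothetical), a test can be reported as affected when its static call graph transitively reaches a program element touched by some atomic change:

```java
import java.util.*;

// Minimal sketch: edits are decomposed into atomic changes, and a test
// is "affected" if its static call graph reaches a changed element.
public class ImpactSketch {
    enum Kind { ADD_METHOD, CHANGE_METHOD_BODY, ADD_ADVICE, CHANGE_POINTCUT }

    record AtomicChange(Kind kind, String element) {}

    // Hypothetical call graph: caller -> callees (including advice
    // woven at the callee's join points).
    static final Map<String, List<String>> CALLS = Map.of(
        "testWithdraw", List.of("Account.withdraw"),
        "Account.withdraw", List.of("Logging.beforeWithdraw"),
        "testDeposit", List.of("Account.deposit"));

    static boolean reaches(String from, String target, Set<String> seen) {
        if (from.equals(target)) return true;
        if (!seen.add(from)) return false;
        for (String callee : CALLS.getOrDefault(from, List.of()))
            if (reaches(callee, target, seen)) return true;
        return false;
    }

    // A test is affected if it transitively reaches any changed element.
    static Set<String> affectedTests(List<AtomicChange> changes, List<String> tests) {
        Set<String> affected = new LinkedHashSet<>();
        for (String t : tests)
            for (AtomicChange c : changes)
                if (reaches(t, c.element(), new HashSet<>())) affected.add(t);
        return affected;
    }

    public static void main(String[] args) {
        List<AtomicChange> changes =
            List.of(new AtomicChange(Kind.CHANGE_METHOD_BODY, "Logging.beforeWithdraw"));
        System.out.println(affectedTests(changes, List.of("testWithdraw", "testDeposit")));
        // -> [testWithdraw]: only the test reaching the changed advice is affected
    }
}
```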
The accuracy of a query optimizer is intricately connected with a database system's performance and operational cost: the more accurate the optimizer's cost model, the better the resulting execution plans. Database application programmers and other practitioners have long provided anecdotal evidence that database systems differ widely in the quality of their optimizers, yet to date no formal method is available to database users to assess or refute such claims. In this paper, we develop a framework to quantify an optimizer's accuracy for a given workload. We exploit the fact that optimizers expose switches or hints that let users influence the plan choice and generate plans other than the default plan. Using these mechanisms, we force the generation of multiple alternative plans for each test case, time the execution of all alternatives, and rank the plans by their effective costs. We compare this ranking with the ranking by estimated cost and compute a score for the optimizer's accuracy. We present initial results of an anonymized comparison of several major commercial database systems, demonstrating that there are in fact substantial differences between systems. We also suggest ways to incorporate this knowledge into the commercial development process.
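As a sketch of the scoring step only (the abstract does not give the paper's exact formula, and the plan data below is made up), one standard way to compare the two rankings is a rank correlation such as Kendall's tau over estimated versus measured costs:

```java
import java.util.*;

// Minimal sketch: assume we forced N alternative plans for one test
// query and recorded each plan's estimated cost and measured runtime.
// Kendall's tau over the two rankings is one way to score agreement;
// 1.0 means the optimizer ranks all plan pairs correctly.
public class OptimizerScore {
    static double kendallTau(double[] estimated, double[] measured) {
        int n = estimated.length, concordant = 0, discordant = 0;
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++) {
                double e = Math.signum(estimated[i] - estimated[j]);
                double m = Math.signum(measured[i] - measured[j]);
                if (e * m > 0) concordant++;
                else if (e * m < 0) discordant++;
            }
        return (concordant - discordant) / (n * (n - 1) / 2.0);
    }

    public static void main(String[] args) {
        // Plans A..D: estimated cost units vs. measured seconds (made up).
        double[] est = { 100, 250, 400, 900 };
        double[] sec = { 1.2, 0.9, 3.5, 7.1 }; // optimizer misranks A vs. B
        System.out.printf("accuracy score (tau) = %.2f%n", kendallTau(est, sec));
        // -> 0.67: one of the six plan pairs is ordered incorrectly
    }
}
```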