Test generation is a costly but necessary activity for increasing the quality of software projects. Automated testing tools based on evolutionary computation principles constitute an appealing modern approach to supporting testing tasks. However, these tools still struggle to detect certain types of plausible faults in real-world projects. Moreover, recent studies have shown that automatically generated tests generally do not resemble manually written ones and, consequently, testers are reluctant to adopt them. We observe two key issues currently hampering the acceptance of automated results: the opacity of the process and the lack of cooperation with the tester. Based on these findings, we explore in this paper how interaction between current tools and expert testers could help address the test case generation problem. More specifically, we identify a number of interaction opportunities in object-oriented test case design aimed at boosting the readability and fault detection power of the generated tests. Using EvoSuite as the base implementation, we present a proof of concept that integrates readability assessments of the most promising test suites into a genetic algorithm.
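To make the idea concrete, the following sketch illustrates one way a tester-supplied readability score could be folded into a genetic algorithm's fitness ranking. All names (CandidateSuite, combinedFitness, the weight alpha) are hypothetical and do not correspond to the actual EvoSuite API or the proof-of-concept code.

```java
// Hypothetical sketch: combining coverage-based fitness with a
// tester-assigned readability score when ranking candidate test suites.
import java.util.Comparator;
import java.util.List;

class CandidateSuite {
    double coverageFitness;   // e.g., branch coverage, normalized to [0, 1]
    double readabilityScore;  // tester-assigned rating in [0, 1]

    CandidateSuite(double coverage, double readability) {
        this.coverageFitness = coverage;
        this.readabilityScore = readability;
    }

    // Weighted combination: coverage remains the primary objective,
    // readability acts as a secondary signal (alpha is illustrative).
    double combinedFitness(double alpha) {
        return (1 - alpha) * coverageFitness + alpha * readabilityScore;
    }
}

class Ranking {
    // Rank the population so only the most promising suites reach the tester.
    static List<CandidateSuite> rank(List<CandidateSuite> population, double alpha) {
        population.sort(
            Comparator.comparingDouble((CandidateSuite s) -> s.combinedFitness(alpha))
                      .reversed());
        return population;
    }
}
```

In a weighted scheme like this, a small alpha keeps the search driven by coverage while still letting readability break ties among equally covering suites; other aggregation strategies (e.g., lexicographic or multi-objective ranking) would be equally plausible.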
Testing is a crucial activity in the development of software systems. With the increasing complexity of software projects, the industry requires graduates with adequate testing skills and preparation in this field. A challenge in software testing education is to make students perceive the benefits of writing tests and to assess their quality with advanced testing techniques. In this paper, we present an experience integrating both mutation testing and self/peer assessment (two of the most commonly used techniques for this purpose) into a software testing course over three years. This experience allowed us to analyze the effect of applying these strategies on the students' perception of their manually written test suites. Notably, the computation of the mutation score significantly undermined the initial expectations they had about their test suites. Also, the application of peer testing helped them estimate the relative quality of two comparable test suites, as we found a notable correspondence with their respective mutation coverage. Furthermore, a more in-depth analysis revealed that the students' test suites with more test cases did not always achieve the highest scores, that the students found their own tests more readable, and that they tended to cover basic operations while overlooking more advanced features. An opinion survey confirmed the impact that the use of mutants had on their perception of testing, and the students largely supported paying greater attention to testing concepts in software engineering degree programs.
Index Terms: Software testing education, mutation testing, peer testing, self/peer assessment.
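For reference, the mutation score mentioned above is conventionally computed as the fraction of non-equivalent mutants killed by a test suite. The minimal sketch below uses illustrative counts, not data from the reported study.

```java
// Minimal sketch of the conventional mutation score computation.
// The counts are illustrative and not taken from the course results.
public class MutationScore {
    public static void main(String[] args) {
        int totalMutants = 200;      // mutants generated for the program
        int equivalentMutants = 15;  // mutants no test can ever kill
        int killedMutants = 120;     // mutants detected by the test suite

        // Score = killed / (total - equivalent); equivalent mutants are
        // excluded because failing to kill them says nothing about the tests.
        double score = (double) killedMutants / (totalMutants - equivalentMutants);
        System.out.printf("Mutation score: %.2f%n", score); // prints 0.65
    }
}
```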
Automated test case generation has proven useful in reducing the typically high cost of software testing. However, several studies have also noted testers' skepticism about the comprehensibility of generated test suites compared to manually designed ones. This suggests that involving testers in the test generation process could increase their acceptance of automatically produced test suites. In this paper, we propose incorporating interactive readability assessments made by a tester into EvoSuite, a widely known evolutionary test generation tool. Our approach, INTEREVO-TR, interacts with the tester at different moments during the search and shows different test cases covering the same coverage target for subjective evaluation. The design of such an interactive approach involves, among other aspects, a schedule of interaction, a method to diversify the selected targets, a plan to save and handle the readability values, and mechanisms to customize the level of engagement in the revision. To analyze the potential and practicability of our proposal, we conducted a controlled experiment in which 39 participants, including academics, professional developers, and student collaborators, interacted with INTEREVO-TR. Our results show that the strategy to select and present intermediate results is effective for readability assessment. Furthermore, the participants' actions and responses to a questionnaire allowed us to analyze the aspects influencing test code readability, as well as the benefits and limitations of an interactive approach in the context of test case generation, paving the way for future developments based on interactivity.
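To illustrate the kind of interaction described, the sketch below shows a scheduled interruption of the search that presents alternative test cases for the same coverage target and caches the tester's readability ratings. All interfaces and names are hypothetical; the actual INTEREVO-TR implementation differs.

```java
// Hypothetical sketch of an interactive readability-assessment loop.
import java.util.HashMap;
import java.util.List;
import java.util.Map;

interface Tester {
    // Returns a readability rating in [0, 1] for the shown test case.
    double rate(String coverageTarget, String testCaseSource);
}

class InteractiveSearch {
    private final Map<String, Double> readabilityCache = new HashMap<>();
    private final Tester tester;
    private final int interactionInterval; // generations between interruptions

    InteractiveSearch(Tester tester, int interactionInterval) {
        this.tester = tester;
        this.interactionInterval = interactionInterval;
    }

    // Called once per generation; pauses the search on scheduled generations
    // to collect ratings for alternative tests covering the same target.
    void maybeInteract(int generation, String target, List<String> alternatives) {
        if (generation % interactionInterval != 0) {
            return; // keep searching without interrupting the tester
        }
        for (String testSource : alternatives) {
            // Cache ratings so the same test is never shown twice.
            readabilityCache.computeIfAbsent(
                testSource, src -> tester.rate(target, src));
        }
    }

    double cachedRating(String testSource) {
        return readabilityCache.getOrDefault(testSource, 0.5); // neutral default
    }
}
```

Caching the ratings and bounding how often the loop interrupts the search reflect two of the design concerns the abstract names: handling saved readability values and controlling the tester's level of engagement.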