Several strategies support test quality measurement and analysis. For example, code coverage, a widely used one, verifies whether test cases cover as many source-code branches as possible. Another set of affordable strategies for evaluating test code quality exists, such as test smells analysis. Test smells are poor design choices in test code implementation, and their occurrence might reduce test suite quality. Practical, large-scale test smells identification depends on automated tool support; otherwise, test smells analysis could become a cost-ineffective strategy. In an earlier study, we proposed the JNose Test, an automated tool to detect test smells and analyze test suite quality from the test smells perspective. This study extends the previous one in two directions: i) we implemented the JNose-Core, an API encompassing the test smells detection rules. Through an extensible architecture, the tool is now capable of accommodating new detection rules or programming languages; and ii) we performed an empirical study to evaluate the effectiveness of the JNose Test and compare it against tsDetect, a state-of-the-art tool. Results showed that the JNose-Core precision score ranges from 91% to 100%, and the recall score from 89% to 100%. It also presented a slight improvement over tsDetect in the detection rules for test smells at the class level.
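The abstracts define test smells only abstractly. As a concrete illustration (not taken from the papers themselves), the snippet below sketches the widely documented "Assertion Roulette" smell, in which one test method packs several assertions with no explanatory messages, so a failure is hard to trace. The class and values are hypothetical, and a plain-Java assertion helper stands in for JUnit so the example is self-contained.

```java
// Illustration of the "Assertion Roulette" test smell: several unlabelled
// assertions in one test method make a failure hard to diagnose.
public class AssertionRouletteDemo {

    // Minimal stand-in assertion helper (plain Java, no JUnit dependency).
    static void check(boolean condition, String message) {
        if (!condition) throw new AssertionError(message);
    }

    // Smelly version: three checks, none with a message. If one fails,
    // the stack trace does not say which condition broke.
    static void smellyTestCartTotals() {
        int itemCount = 2;
        double total = 20.0;
        check(itemCount == 2, null);
        check(total == 20.0, null);
        check(total / itemCount == 10.0, null);
    }

    // Refactored version: each assertion carries a message, one common
    // fix reported in the test smells literature.
    static void cleanTestCartTotals() {
        int itemCount = 2;
        double total = 20.0;
        check(itemCount == 2, "cart should hold 2 items");
        check(total == 20.0, "total should be 20.0");
        check(total / itemCount == 10.0, "unit price should be 10.0");
    }

    public static void main(String[] args) {
        smellyTestCartTotals();
        cleanTestCartTotals();
        System.out.println("both example tests passed");
    }
}
```

Detection tools such as tsDetect and the JNose Test flag patterns like the smelly method above via rule-based analysis of the test source.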
Developing test code may be a time-consuming process that requires much effort and cost, especially when done manually. In addition, during this process, developers and testers are likely to adopt bad design choices, which may lead to introducing the so-called test smells in the test code. As smelly test code grows in size, these tests might become more complex and, as a consequence, much more challenging to understand and evolve correctly. Therefore, test smells may harm test code quality and maintenance and undermine the whole software testing process. In this context, this study aims to understand whether software testing practitioners unintentionally insert test smells when they implement test code. We first carried out an expert survey to analyze the usage frequency of a set of test smells and then interviews to reach a deeper understanding of how practitioners deal with test smells. Sixty professionals participated in the survey, and fifty professionals participated in the interviews. The yielded results indicate that experienced professionals introduce test smells during their daily programming tasks, even when using their companies' standardized practices. Additionally, tools support test development and quality improvement, but most interviewees are not aware of the concept of test smells.
Developing test code may be a time-consuming task that usually requires much effort and cost, especially when it is done manually. Besides, during this process, developers and testers are likely to adopt bad design choices, which may lead to the introduction of the so-called test smells in test code. Test smells are bad solutions to the implementation or design of test code. As smelly test code increases in size, these tests might become more complex and, as a consequence, much harder to understand and evolve correctly. Therefore, test smells may have a negative impact on the quality and maintenance of test code and may also harm the whole software testing process. In this context, this study aims to understand whether test professionals unintentionally insert test smells. We carried out an expert survey to analyze the usage frequency of a set of test smells. Sixty professionals from different companies participated in the survey. We selected 14 widely studied smells from the literature, which are also implemented in existing test smell detection tools. The yielded results indicate that experienced professionals introduce test smells during their daily programming tasks, even when they are using standardized practices from their companies, and not only because of their personal assumptions. Another relevant finding was that developers' professional experience cannot be considered a root cause of the insertion of test smells in test code.