Test-driven development is an essential part of the eXtreme Programming approach and is also favoured in other Agile methods. For several years, researchers have been performing empirical investigations to evaluate the quality improvements in the resulting code when test-driven development is used. However, very little has been reported on the quality of the testing performed in conjunction with test-driven development. In this paper we present results from an experiment specifically designed to evaluate the quality of test cases created by developers who used the test-first and the traditional test-last approaches. On average, the quality of testing in test-driven development was almost the same as the quality of testing using the test-last approach. However, detailed analysis of the test cases created by the test-driven development group revealed that 29% of them were "negative" test cases (based on non-specified requirements), yet they contributed as much as 65% to the overall test quality score of the test-first developers. We are currently investigating the possibility of extending test-driven development to accommodate non-specified requirements to a greater extent and thus minimise the impact of a potentially inherent positive test bias.
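To make the positive/negative distinction concrete, the following is a minimal sketch (not taken from the paper; the function `parse_age` and its behaviour are hypothetical) of a positive test, which exercises a specified requirement, alongside negative tests, which probe behaviour the specification leaves open, such as invalid input:

```python
import pytest


def parse_age(text: str) -> int:
    """Hypothetical unit under test: parse a non-negative integer age."""
    value = int(text)  # raises ValueError for non-numeric input
    if value < 0:
        raise ValueError("age must be non-negative")
    return value


def test_parse_age_valid():
    # Positive test: checks the specified requirement directly.
    assert parse_age("42") == 42


def test_parse_age_rejects_negative():
    # Negative test: probes input the specification does not cover.
    with pytest.raises(ValueError):
        parse_age("-1")


def test_parse_age_rejects_non_numeric():
    # Negative test: malformed input, again outside the specification.
    with pytest.raises(ValueError):
        parse_age("forty-two")
```

In the terms of the study, a test-first developer who writes only `test_parse_age_valid` exhibits positive test bias; the two negative tests are the kind that, despite being a minority of cases, contributed disproportionately to the overall quality score.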
In the engineering of safety-critical systems, regulatory standards often impose requirements on both traceable requirements-based testing and structural coverage of software units. Automated test input generation techniques can be used to generate test data that cover the structural aspects of a program. However, there is no conclusive evidence on how automated test input generation compares to manual test generation, or how test case generation based on the program structure compares to specification-based test case generation. This paper investigates automated and manual, specification- and implementation-based testing of IEC 61131-3 Function Block Diagram code, a programming standard used in many embedded safety-critical software systems, measuring efficiency and effectiveness in terms of fault detection. For this purpose, a controlled experiment was conducted comparing test suites created by a total of 23 industrial software engineering master's students. The participants worked individually, manually designing and automatically generating tests for two industrial programs. All tests created by each participant during the experiment were collected and analyzed in terms of mutation score, decision coverage, test length, and the time required for creating, executing, and checking the tests. We found that, compared to implementation-based testing, specification-based testing yields significantly more effective test cases in terms of the number of faults detected. Specifically, specification-based tests detect comparison and value replacement faults more effectively than implementation-based tests. On the other hand, implementation-based automated testing leads to shorter tests (an improvement of up to 85%) created in less time than those manually derived from the specification.
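To illustrate the two fault categories, here is a minimal sketch (in Python rather than IEC 61131-3 code; `limit_check` and its threshold are hypothetical) of a comparison-replacement mutant and a value-replacement mutant, together with a specification-derived boundary test that kills both:

```python
def limit_check(value: int) -> bool:
    """Original unit: the alarm must trigger once value reaches the limit."""
    return value >= 100


def comparison_mutant(value: int) -> bool:
    """Comparison replacement fault: >= mutated to >."""
    return value > 100


def value_mutant(value: int) -> bool:
    """Value replacement fault: the constant 100 mutated to 101."""
    return value >= 101


def test_alarm_triggers_at_limit():
    # Specification-based boundary test: the alarm triggers exactly at
    # the limit. It passes on the original but fails, i.e. kills the
    # mutant, when run against either faulty version.
    assert limit_check(100) is True


if __name__ == "__main__":
    # At the boundary, only the original returns True.
    for fn in (limit_check, comparison_mutant, value_mutant):
        print(f"{fn.__name__}(100) -> {fn(100)}")
```

Note that a test suite built only for decision coverage (say, inputs 50 and 150) would exercise both branches yet leave both mutants alive, which is consistent with the effectiveness gap between implementation-based and specification-based tests that the experiment reports.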