Test-driven development (TDD) is one of the basic practices of agile software development, and both academics and practitioners claim that TDD, to a certain extent, improves the quality of the code produced by developers. However, recent results suggest that this practice is not followed to the extent preferred by industry. In order to pinpoint specific obstacles limiting its industrial adoption, we have conducted a systematic literature review of empirical studies focusing on TDD explicitly as well as addressing it indirectly. Our review identified seven limiting factors: increased development time, insufficient TDD experience/knowledge, lack of upfront design, domain- and tool-specific issues, lack of developer skill in writing test cases, insufficient adherence to the TDD protocol, and legacy code. The results of this study are of special importance to the testing community, since they outline the direction for further detailed scientific investigations and highlight the need for guidelines to overcome these limiting factors for successful industrial adoption of TDD.
RESTful APIs are an increasingly common way to expose the functionality of software systems, and it is therefore of high interest to find methods to automatically test and verify such APIs. To lower the barrier for industry adoption, such methods need to be straightforward to use and require low effort. This paper introduces a method to explore the behaviour of a RESTful API by using automatic property-based tests generated from the OpenAPI documents that describe the REST API under test. We describe how this method creates artifacts that can be leveraged both as property-based test generators and as a source of validation for results (i.e., as test oracles). Experimental results, on both industrial and open-source services, indicate that this approach is a low-effort way of finding real faults. Furthermore, it supports building additional knowledge about the system under test by automatically exposing misalignments between specification and implementation. Since the tests are generated from the OpenAPI document, the method automatically evolves test cases as the REST API evolves.
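To illustrate the kind of property-based test such a method could derive from an OpenAPI document, the following Python sketch is a minimal, hypothetical example (it is not the authors' tool; the endpoint, parameter names, and the use of the hypothesis and requests libraries are assumptions made for illustration). It generates inputs from a parameter schema and uses the response codes declared in the specification as a simple test oracle.

```python
# Minimal sketch: derive a property-based test from a fragment of an OpenAPI
# document. The service URL, operation, and declared responses are hypothetical.
import requests
from hypothesis import given, settings, strategies as st

BASE_URL = "http://localhost:8080"  # assumed service under test

# Hand-copied fragment of an OpenAPI operation (normally parsed from the spec).
operation = {
    "path": "/products/{productId}",
    "parameters": [{"name": "productId",
                    "schema": {"type": "integer", "minimum": 1}}],
    "responses": {"200", "404"},  # status codes the specification declares
}

def strategy_for(schema):
    """Map a (tiny subset of a) JSON schema to a Hypothesis strategy."""
    if schema["type"] == "integer":
        return st.integers(min_value=schema.get("minimum", -2**31),
                           max_value=schema.get("maximum", 2**31 - 1))
    raise NotImplementedError(schema["type"])

@settings(max_examples=50, deadline=None)
@given(value=strategy_for(operation["parameters"][0]["schema"]))
def test_operation_conforms_to_spec(value):
    url = BASE_URL + operation["path"].replace("{productId}", str(value))
    response = requests.get(url, timeout=5)
    # Oracle derived from the spec: only declared status codes are acceptable.
    # An undeclared code (e.g., 500) signals either a fault or a mismatch
    # between specification and implementation worth investigating.
    assert str(response.status_code) in operation["responses"], (
        f"undocumented status {response.status_code} for {url}")
```

Because both the input generator and the oracle are read from the OpenAPI document, regenerating the tests after a specification change keeps them aligned with the evolving API, which is the low-effort property the abstract emphasises.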
Summary: Automated software testing is a critical enabler for modern software development, where rapid feedback on product quality is expected. For testing to work well, it is of high importance that impediments related to test automation are prevented and removed quickly. An enabling factor for all types of improvement is to understand the nature of what is to be improved. We have performed a systematic literature review of reported impediments related to software test automation to contribute to this understanding. In this paper, we present the results of the systematic literature review: the list of identified publications, a categorization of the identified impediments, and a qualitative discussion of the impediments that proposes a socio-technical system model of the use and implementation of test automation.

Between 30% and 80% of development costs are reported to be related to testing [2,3]. The high use of test automation also makes the automated tests mission critical: if the test automation stops working or slows down, development will also stop or slow down. The investment cost for test automation can be significant. It is difficult to propose a generalizable cost model, due to the lack of published information on the cost of ownership for test automation. However, the size and complexity of a test system can be in the order of, or often larger than, the complexity and size of the tested product [10,11][IP7][12]. This makes it reasonable to conjecture that the cost of ownership of an automated test execution system is in the order of the cost of development of the product it is testing. Even if much less effort is spent on test systems than on the sold products, they still represent a major investment for the software business as a whole. If the test automation does not perform as well as expected, this investment is unlikely to be recovered as planned, which may have significant consequences for the business. Further, it is reasonable to conjecture that test systems suffer from technical debt issues similar to those of regular systems. Technical debt is a term used to describe the gap between the current state of a system and its ideal state [13], usually in situations where a compromise is made during development to meet demands in one dimension, such as lead time, by sacrificing work in another dimension, such as architecture.
Background: Test-driven development, as a side effect of developing software, produces a set of accompanying test cases which can protect implemented features during code refactoring. However, recent research results point out that successful adoption of test-driven development might be limited by the testing skills of the developers using it. Aim: The main goal of this paper is to investigate whether there is a difference between the quality of test cases created using test-first and test-last approaches. An additional goal is to measure the quality of the code produced using the test-first and test-last approaches. Method: A pilot study was conducted during the master-level course on Software Verification & Validation at Mälardalen University. Students worked individually on the problem implementation and were randomly assigned to a test-first or a test-last (control) group. The source code and test cases created by each participant during the study, as well as their answers to a survey questionnaire after the study, were collected and analysed. The quality of the test cases is analysed from three perspectives: (i) code coverage, (ii) mutation score, and (iii) the total number of failing assertions. Results: The total number of test cases with failing assertions (test cases revealing an error in the code) was nearly the same for the test-first and test-last groups. This can be interpreted as "test cases created by test-first developers were as good as (or as bad as) test cases created by test-last developers". In contrast, solutions created by test-first developers had, on average, 27% fewer failing assertions than solutions created by the test-last group. Conclusions: Though the study provided some interesting observations, it needs to be repeated as a fully controlled experiment with a larger number of participants in order to establish the statistical significance of the presented results. Index Terms: software testing; test case efficiency; test driven development; controlled experiment.
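To make the three test-quality measures concrete, the sketch below is a minimal illustration, not the study's actual analysis scripts; all counts and the helper names are hypothetical. It shows how line coverage, mutation score, and the failing-assertion count are typically computed from raw tool output.

```python
# Minimal sketch of the three test-quality metrics; all numbers are hypothetical.

def line_coverage(executed_lines: int, total_lines: int) -> float:
    """Fraction of executable lines exercised by the test suite."""
    return executed_lines / total_lines

def mutation_score(killed_mutants: int, generated_mutants: int,
                   equivalent_mutants: int = 0) -> float:
    """Killed mutants divided by the non-equivalent mutants that were generated."""
    return killed_mutants / (generated_mutants - equivalent_mutants)

# Hypothetical results for one participant's submission.
print(f"coverage        = {line_coverage(180, 200):.2f}")   # 0.90
print(f"mutation score  = {mutation_score(34, 50, 2):.2f}")  # 0.71
assertion_results = ["pass", "fail", "pass", "fail"]
print(f"failing asserts = {assertion_results.count('fail')}")  # 2
```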