Manual test suites are typically described in natural language, and over time large manual test suites become disordered and harder to use and maintain. This paper addresses the challenge of providing tool support for refactoring such test suites to make them more usable and maintainable. We describe how we applied machine-learning and NLP techniques, along with other algorithms, to the refactoring of manual test suites, together with the tool support we built to embody these techniques and to allow test suites to be explored and visualised. We evaluate our approach on several industrial test suites and report the time savings that were obtained.
The Agile and DevOps transformation of software development practices increases the need for automation of functional testing, especially regression testing. This poses challenges both in the effort required to create and maintain automated test scripts and in their relevance (i.e. their alignment with business needs). Test automation remains difficult to implement and maintain, and the return on investment comes late, while projects tend to be short. In this context, we have experimented with a lightweight model-based test automation approach that addresses both the productivity and the relevance challenges. It integrates test automation through a simple process and tool chain, which we have applied on large IT projects.