Search-based software testing has emerged in recent years as an important research area within automated software test data generation. The general approach of couching the satisfaction of test goals as numerical optimisation problems has been applied to a variety of problems, such as satisfying structural coverage criteria, specification falsification, exception generation, breaking unit pre-conditions, and software hazard discovery. However, some test goals may be hard to satisfy. For example, a program branch may be difficult to reach via a search-based technique, because the domain of the data that causes it to be taken is exceedingly small, or because the non-linearity of the "fitness landscape" precludes the provision of effective guidance to the search for test data. In this paper we propose to "stretch" relevant conditions within a program to make them easier to satisfy. We find test data that satisfies the corresponding test goal of the stretched program. We then transform the stretched program by stages back to the original, simultaneously migrating the obtained test data to produce test data that satisfies the goal for the original program. The "stretching" device is remarkably simple, shows significant promise for obtaining hard-to-find test data, and also gives efficiency improvements over standard search-based testing approaches.

1. Introduction
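The staged scheme described above can be sketched in a few lines. The following is a minimal, hypothetical illustration (not code from the paper): a branch guarded by an exact equality f(x) == 0 is "stretched" into the easier-to-satisfy condition |f(x)| <= k; a simple search finds an input for a generous tolerance k, and k is then shrunk in stages back to zero, with each stage seeded by the test datum migrated from the previous one. The function f, the fitness measure, and the hill-climbing search engine are all assumptions chosen for illustration.

```python
import random

def f(x):
    # Hypothetical hard-to-reach branch: taken only when f(x) == 0
    # (integer zeros at x = 1 and x = 2).
    return x * x - 3 * x + 2

def fitness(x, k):
    # Distance from satisfying the stretched condition |f(x)| <= k;
    # zero means the (stretched) branch is taken.
    return max(0, abs(f(x)) - k)

def local_search(x, k, steps=5000):
    # Deliberately simple hill climber standing in for any
    # search-based test data generator.
    best = x
    for _ in range(steps):
        cand = best + random.randint(-5, 5)
        if fitness(cand, k) < fitness(best, k):
            best = cand
        if fitness(best, k) == 0:
            break
    return best

random.seed(1)
x = random.randint(-1000, 1000)      # arbitrary starting test datum
for k in [1000, 100, 10, 1, 0]:      # progressively "un-stretch" the condition
    x = local_search(x, k)           # migrate the datum to the tighter program
```

Each intermediate stage is easy for the search because its predecessor leaves the datum close to satisfying the slightly tighter condition; by the final stage (k = 0) the migrated datum satisfies the original, unstretched branch.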
Dynamic Testing

Dynamic testing - "the dynamic verification of the behaviour of a program on a finite set of test cases, suitably selected from the usually infinite executions domain, against the expected behaviour" [1] - is used to gain confidence in almost all developed software. Various static approaches can be used to gain further confidence, but it is generally felt that only dynamic testing can provide confidence in the correct functioning of the software in its intended environment.

We cannot perform exhaustive testing because the domain of program inputs is usually too large and there are too many possible execution paths. Therefore, the software is tested using a suitably selected set of test cases. A variety of coverage criteria have been proposed to assess how effective test sets are likely to be. Historically, criteria exercising aspects of control flow, such as statement and branch coverage [2], have been the most common. Further criteria, such as data flow [3], or sophisticated condition-oriented criteria such as MC/DC coverage [4], have been adopted for specific application domains. Many of these criteria are motivated by general principles (e.g. you cannot have much confidence in the correctness of a statement without exercising it); others target specific commonly occurring fault types (e.g. boundary value coverage).

Finding a set of test data to achieve identified coverage criteria is typically a labour-intensive activity, consuming a good part of the resources of the software development process. Automation of this process can greatly reduce the cost of testing and hence the overall cost of the system. Many automated test data gen...