Formal methods and testing are two important approaches that assist in the development of high-quality software. While traditionally these approaches have been seen as rivals, in recent years a new consensus has developed in which they are seen as complementary. This article reviews the state of the art regarding ways in which the presence of a formal specification can be used to assist testing.
State machines capture the sequential behavior of software systems. Their intuitive visual notation, along with a range of powerful verification and testing techniques, renders them an important part of the model-driven software engineering process. There are several situations that require the ability to identify and quantify the differences between two state machines (e.g., the accuracy of a state machine inference technique is measured by the similarity of the reverse-engineered model to its reference model). State machines can be compared from two complementary perspectives: (1) in terms of their language, i.e., the externally observable sequences of events that are permitted or not, and (2) in terms of their structure, i.e., the actual states and transitions that govern the behavior. This article describes two techniques to compare models in terms of these two perspectives. It shows how the difference can be quantified and measured by adapting existing binary classification performance measures for the purpose. The approaches have been implemented by the authors, and the implementation is openly available. Feasibility is demonstrated via a case study that compares two real state machine inference approaches. Scalability and accuracy are assessed experimentally with respect to a large collection of randomly synthesized models.
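The binary-classification view of language comparison can be illustrated with a minimal sketch. The code below is not the authors' implementation: it assumes a state machine is represented as a pair of an initial state and a transition map, and the names `accepts` and `language_difference` are illustrative. It samples random event sequences, treats the reference model's accept/reject verdict as ground truth, and computes precision and recall for the candidate model.

```python
import random

def accepts(machine, sequence):
    """Return True if the machine accepts the event sequence.

    A machine is modelled as (initial_state, transitions), where
    transitions maps (state, event) -> next_state; a sequence is
    accepted if every event can be followed from the initial state.
    """
    state, transitions = machine[0], machine[1]
    for event in sequence:
        key = (state, event)
        if key not in transitions:
            return False
        state = transitions[key]
    return True

def language_difference(reference, candidate, alphabet,
                        max_len=5, samples=10000, seed=0):
    """Estimate precision/recall of `candidate` against `reference`
    over randomly sampled event sequences (binary-classification view)."""
    rng = random.Random(seed)
    tp = fp = fn = tn = 0
    for _ in range(samples):
        length = rng.randint(0, max_len)
        seq = [rng.choice(alphabet) for _ in range(length)]
        truth = accepts(reference, seq)   # ground-truth verdict
        guess = accepts(candidate, seq)   # candidate's verdict
        if truth and guess:
            tp += 1
        elif guess:
            fp += 1
        elif truth:
            fn += 1
        else:
            tn += 1
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 1.0
    return precision, recall

# Hypothetical example: a candidate model missing one transition
ref = ("s0", {("s0", "open"): "s1", ("s1", "close"): "s0"})
cand = ("s0", {("s0", "open"): "s1"})
print(language_difference(ref, cand, alphabet=["open", "close"]))
```

Under this framing, precision penalizes sequences the candidate accepts that the reference rejects, while recall penalizes accepted reference behavior that the candidate misses; structural comparison would instead align states and transitions directly.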
This paper addresses the challenge of generating test sets that achieve functional coverage in the absence of a complete specification. The inductive testing technique works by probing the system behaviour with tests and using the test results to construct an internal model of software behaviour, which is then used to generate further tests. The idea in itself is not new, but prior attempts to implement it have been hampered by cost, poor scalability, and inflexibility with respect to testing strategies. In the past, inductive testing techniques have tended to focus on the inferred models rather than the suitability of the test sets generated in the process. This paper presents a flexible implementation of the inductive testing technique and demonstrates its application with a case study that applies it to the Linux TCP stack implementation. The evaluation shows that the generated test sets achieve much better coverage of the system than would be achieved by comparable non-inductive techniques.
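The probe-infer-generate cycle described above can be sketched as a simple loop. This is only an illustrative outline, not the paper's implementation: `system_under_test`, `infer_model`, and `generate_tests` are hypothetical placeholders for the test driver, the model inference step, and the model-driven test generator.

```python
def inductive_testing(system_under_test, initial_tests, infer_model,
                      generate_tests, max_iterations=10):
    """Minimal sketch of an inductive testing loop.

    system_under_test(test) runs one test and returns the observed trace;
    infer_model(observations) builds a state-machine hypothesis from all
    traces seen so far; generate_tests(model) derives new tests from the
    hypothesis (e.g. targeting unexplored transitions).
    """
    observations = []
    tests = list(initial_tests)
    model = None
    for _ in range(max_iterations):
        # Probe the system with the current test set and record traces.
        for test in tests:
            observations.append((test, system_under_test(test)))
        # Infer a behavioural model from everything observed so far.
        model = infer_model(observations)
        # Use the model to generate the next round of tests.
        tests = generate_tests(model)
        if not tests:  # no new tests proposed: hypothesis has stabilized
            break
    return model, observations
```

The coverage benefit comes from the feedback step: each inferred hypothesis points the generator at behaviour that the tests so far have not exercised, which a fixed, non-inductive test set cannot do.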