Abstract: Statistical testing of software establishes a basis for statistical inference about a software system's expected field quality. This paper describes a method for statistical testing based on a Markov chain model of software usage. The significance of the Markov chain is twofold. First, it allows test input sequences to be generated from multiple probability distributions, making it more general than many existing techniques. Analytical results associated with Markov chains facilitate informative analysis of the sequences before they are generated, indicating how the test is likely to unfold. Second, the test input sequences generated from the chain and applied to the software are themselves a stochastic model and are used to create a second Markov chain to encapsulate the history of the test, including any observed failure information. The influence of the failures is assessed through analytical computations on this chain. We also derive a stopping criterion for the testing process based on a comparison of the sequence-generating properties of the two chains.
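The sketch below illustrates the two-chain idea under simplifying assumptions: a small hypothetical usage chain (the states "Start", "Menu", "Edit", "Exit" and their transition probabilities are invented for illustration), random-walk generation of test cases, a second chain built from the transitions actually exercised, and a Kullback-Leibler-style divergence between the two chains as a stand-in for the paper's stopping criterion. Failure states are omitted, and the divergence shown is not the published discriminant.

    import math
    import random

    # Hypothetical usage chain: states and transition probabilities.
    # "Exit" ends a usage session; one test case is one walk from "Start" to "Exit".
    USAGE = {
        "Start": {"Menu": 1.0},
        "Menu":  {"Edit": 0.7, "Exit": 0.3},
        "Edit":  {"Menu": 0.8, "Exit": 0.2},
    }

    def generate_test_case(rng):
        """Random walk through the usage chain, yielding (from_state, to_state) pairs."""
        state = "Start"
        while state != "Exit":
            targets, probs = zip(*USAGE[state].items())
            nxt = rng.choices(targets, weights=probs)[0]
            yield state, nxt
            state = nxt

    # The "testing chain": counts of the transitions actually exercised so far.
    counts = {s: {t: 0 for t in row} for s, row in USAGE.items()}

    def record(test_case):
        for s, t in test_case:
            counts[s][t] += 1

    def divergence():
        """Kullback-Leibler-style distance of the empirical testing chain from the
        usage chain, summed over visited states (an illustrative stand-in for the
        paper's discriminant, not the published formula)."""
        total = 0.0
        for s, row in USAGE.items():
            n = sum(counts[s].values())
            if n == 0:
                continue  # state never reached yet
            for t, p in row.items():
                q = counts[s][t] / n
                if q > 0.0:
                    total += q * math.log(q / p)
        return total

    rng = random.Random(0)
    for i in range(1, 1001):
        record(generate_test_case(rng))
        if i % 250 == 0:
            print(f"after {i} test cases, divergence = {divergence():.4f}")

As testing proceeds the divergence tends toward zero, signalling that the realized test history has become statistically close to the usage chain, which is the intuition behind the stopping criterion described above.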
A procedure for modeling software usage with the finite state, discrete parameter Markov chain is described. It involves rigorous analysis of the specification before design and coding begin. Many benefits emerge from this process, including the ability to synthesize a macro level usage distribution from a micro level understanding of how the software will be used. This usage distribution becomes the basis for a statistical test of the software, which is fundamental to the Cleanroom development process. Some analytical results known for Markov chains that have meaningful implications and interpretations for the software development process are described.
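As a concrete illustration of the kind of analytical result referred to above, the following sketch computes the long-run usage distribution (the stationary distribution of the chain) and the expected test-case length for the same hypothetical usage chain used earlier, with an added Exit-to-Start transition so the chain is recurrent. The model and numbers are invented for illustration and are not taken from the paper.

    import numpy as np

    # Hypothetical usage chain, made recurrent by returning from "Exit" to "Start"
    # so that a stationary (long-run usage) distribution exists.
    states = ["Start", "Menu", "Edit", "Exit"]
    P = np.array([
        # Start  Menu  Edit  Exit
        [0.0,    1.0,  0.0,  0.0],   # Start
        [0.0,    0.0,  0.7,  0.3],   # Menu
        [0.0,    0.8,  0.0,  0.2],   # Edit
        [1.0,    0.0,  0.0,  0.0],   # Exit
    ])

    # The stationary distribution pi solves pi = pi P with its entries summing to 1.
    # Solve the overdetermined system (P^T - I) pi = 0, sum(pi) = 1 by least squares.
    n = len(states)
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi = np.linalg.lstsq(A, b, rcond=None)[0]

    for s, p in zip(states, pi):
        print(f"{s:>5s}: long-run usage proportion {p:.3f}")

    # Standard Markov chain result: the mean recurrence time of "Exit" is 1/pi(Exit),
    # i.e. the expected number of events (transitions) per generated test case.
    print("expected events per test case:", 1.0 / pi[states.index("Exit")])

Results of this kind let testers estimate, before any test case is run, how often each state will be visited and how long test cases will be on average; this is the sense in which a macro level usage distribution and the likely shape of the test can be analyzed from the micro level model.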
Software testing requires the use of a model to guide such efforts as test selection and test verification. Often, such models are implicit, existing only in the head of a human tester who applies test inputs in an ad hoc fashion. The mental model testers build encapsulates application behavior, allowing testers to understand the application's capabilities and more effectively test its range of possible behaviors. When these mental models are written down, they become sharable, reusable testing artifacts; in that case, testers are performing what has come to be known as model-based testing. Model-based testing has recently gained attention with the popularization of models (including UML) in software design and development. There are a number of models of software in use today, a few of which make good models for testing. This paper introduces model-based testing and discusses its tasks in general terms, with finite state models (arguably the most popular software models) as examples. In addition, the advantages, difficulties, and shortcomings of various model-based approaches are concisely presented. Finally, we close with a discussion of where model-based testing fits in the present and future of software engineering.
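To make the finite-state-model discussion concrete, here is a minimal sketch assuming a hypothetical login dialog: the model maps each (state, input) pair to an expected next state, and it is used both to enumerate allowable input sequences and as the oracle against which a system under test is checked. The states, inputs, and the system_under_test interface are all invented for illustration.

    # Hypothetical finite state model of a login dialog. The table serves both as
    # the test generator (which inputs are legal in which state) and as the oracle
    # (which state the application should end up in after each input).
    MODEL = {
        ("LoggedOut", "enter_valid_credentials"): "LoggedIn",
        ("LoggedOut", "enter_bad_credentials"):   "LoggedOut",
        ("LoggedIn",  "log_out"):                 "LoggedOut",
        ("LoggedIn",  "view_profile"):            "LoggedIn",
    }

    def test_sequences(start, length):
        """Enumerate every input sequence of the given length that the model allows."""
        def extend(state, seq):
            if len(seq) == length:
                yield seq
                return
            for (s, inp), nxt in MODEL.items():
                if s == state:
                    yield from extend(nxt, seq + [inp])
        yield from extend(start, [])

    def run_against_model(system_under_test, start, seq):
        """Drive the system under test with one sequence, comparing each observed
        state to the model's prediction; return the first mismatch, if any."""
        expected = start
        for inp in seq:
            expected = MODEL[(expected, inp)]
            observed = system_under_test.apply(inp)   # hypothetical SUT interface
            if observed != expected:
                return (inp, expected, observed)
        return None

    # Example: enumerate all model-allowed sequences of three inputs.
    for seq in test_sequences("LoggedOut", 3):
        print(seq)

Writing the model down in this form is what turns a tester's implicit mental model into a sharable, reusable artifact from which both test inputs and expected results can be derived.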