Model-based testing relies on behavior models for the generation of model traces: input and expected output (test cases) for an implementation. We use the case study of an automotive network controller to assess different test suites in terms of error detection, model coverage, and implementation coverage. Some of these suites were generated automatically, with and without the model: purely at random and with dedicated functional test selection criteria. Other suites were derived manually, with and without the model at hand. Both automatically and manually derived model-based test suites detected significantly more requirements errors than hand-crafted test suites that were derived directly from the requirements. The number of detected programming errors did not depend on the use of models. Automatically generated model-based test suites detected as many errors as hand-crafted model-based suites with the same number of tests. A sixfold increase in the number of model-based tests led to an 11% increase in detected errors.
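As a rough sketch of what such model traces are, the following Python fragment walks a toy Mealy-style behavior model at random and records input/expected-output pairs; the model, state names, and helper function are hypothetical and not taken from the case study.

    # Hypothetical behavior model: state -> {input: (next_state, expected_output)}
    import random

    MODEL = {
        "Idle":   {"wake":  ("Active", "ack"),   "frame": ("Idle",   "ignored")},
        "Active": {"frame": ("Active", "relay"), "sleep": ("Idle",   "ack")},
    }

    def generate_trace(model, start="Idle", length=5, rng=random):
        """Walk the model at random, recording (input, expected_output) pairs."""
        state, trace = start, []
        for _ in range(length):
            inp = rng.choice(list(model[state]))     # pick any enabled input
            state, out = model[state][inp]           # model predicts the output
            trace.append((inp, out))
        return trace

    if __name__ == "__main__":
        random.seed(0)
        print(generate_trace(MODEL))  # e.g. [('wake', 'ack'), ('frame', 'relay'), ...]

Random generation as sketched here corresponds to the purely random strategy mentioned above; dedicated functional test selection criteria would instead steer the walk toward particular model behaviors.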
The idea of model-based testing is to compare the I/O behavior of an explicit behavior model with that of a system under test (SUT). This requires the model to be valid. If the model is a simplification of the SUT, then it is easier to check the model and use it for subsequent test case generation than to check the SUT directly; in this case, the different levels of abstraction must be bridged. Not surprisingly, experience shows that choosing the right level of abstraction is crucial to the success of model-based testing. We argue that models for specification purposes, models for test generation, and models for full code generation are likely to differ. The paper classifies and discusses different abstractions. It is intended as a step towards guidelines for those who build behavior models for the purpose of testing.
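The following sketch illustrates one common way such an abstraction gap is bridged in practice: abstract model inputs are concretized into SUT stimuli, and concrete SUT responses are abstracted back before comparison. All names and mappings below (concretize, abstract_output, run_sut) are assumptions for illustration, not an interface defined by the paper.

    def concretize(abstract_input):
        """Map an abstract model input to a concrete SUT stimulus (assumed mapping)."""
        return {"wake": b"\x01", "sleep": b"\x02", "frame": b"\x10"}[abstract_input]

    def abstract_output(concrete_output):
        """Map a concrete SUT response back to the model's output alphabet."""
        return {b"\xA0": "ack", b"\xB0": "relay", b"\x00": "ignored"}[concrete_output]

    def run_test(trace, run_sut):
        """Execute one model trace against the SUT; report the first mismatch."""
        for step, (abstract_in, expected_out) in enumerate(trace):
            actual = abstract_output(run_sut(concretize(abstract_in)))
            if actual != expected_out:
                return f"step {step}: expected {expected_out}, got {actual}"
        return "pass"

    if __name__ == "__main__":
        fake_sut = {b"\x01": b"\xA0", b"\x02": b"\xA0", b"\x10": b"\xB0"}.get
        print(run_test([("wake", "ack"), ("frame", "relay")], fake_sut))  # -> pass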
For behavior models expressed in statechart-like formalisms, we show how to compute semantically equivalent yet structurally different models. These refactorings are defined by user-provided logical predicates that partition the system's state space and characterize coherent parts of the behavior (modes, or control states). We embed the refactorings into an incremental development process that uses a combination of tables and graphically represented state machines for describing systems.
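A minimal sketch of the underlying idea, assuming a flat state space given as data valuations: user-provided predicates partition the states into modes, which can then serve as control states of a restructured machine. The predicates and state representation are invented for illustration and are not the paper's formalism.

    # Flat state space: each state is a data valuation.
    STATES = [{"speed": 0,  "door": "open"},
              {"speed": 0,  "door": "closed"},
              {"speed": 50, "door": "closed"}]

    # User-provided predicates; together they must partition the state space.
    MODES = {
        "Standstill": lambda s: s["speed"] == 0,
        "Driving":    lambda s: s["speed"] > 0,
    }

    def partition(states, modes):
        """Group states into control states (modes) defined by the predicates."""
        groups = {name: [] for name in modes}
        for state in states:
            matching = [name for name, pred in modes.items() if pred(state)]
            assert len(matching) == 1, "predicates must partition the state space"
            groups[matching[0]].append(state)
        return groups

    print(partition(STATES, MODES))
    # {'Standstill': [{'speed': 0, ...}, {'speed': 0, ...}], 'Driving': [{'speed': 50, ...}]}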
The definition of a transparent software architecture is one of the key issues in the early development phases of complex distributed and reactive software systems. In this paper, we show how to systematically derive an architecture for systems whose communication model is based on broadcasting. Adequate graphical description techniques for capturing interaction requirements and logical component architectures of broadcasting systems have so far been unavailable. We introduce an extension of UML sequence diagrams to capture broadcasting scenarios. Furthermore, we present methodological steps for constructively deriving structural and behavioral aspects of the architecture under consideration from the captured scenarios.
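Purely as an illustration of the kind of derivation meant here (not the paper's notation or method), the sketch below represents a broadcasting scenario as a list of broadcast events and extracts, per component, which messages it sends and reacts to; all component and message names are made up.

    from collections import defaultdict

    # Each scenario step: (sender, broadcast message, components that react).
    SCENARIO = [
        ("KeyFob",   "unlock", ["DoorCtrl", "LightCtrl"]),
        ("DoorCtrl", "opened", ["LightCtrl", "Dashboard"]),
    ]

    def derive_interfaces(scenario):
        """Collect, per component, the messages it broadcasts and reacts to."""
        sends, reacts = defaultdict(set), defaultdict(set)
        for sender, message, receivers in scenario:
            sends[sender].add(message)
            for receiver in receivers:
                reacts[receiver].add(message)
        components = set(sends) | set(reacts)
        return {c: {"sends": sends[c], "reacts_to": reacts[c]} for c in components}

    print(derive_interfaces(SCENARIO))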