Abstract. Testing model transformations requires input models: graphs of interconnected objects that must conform to a meta-model and to meta-constraints from heterogeneous sources such as well-formedness rules, transformation pre-conditions, and test strategies. Manually specifying such models is tedious, since each model must simultaneously satisfy several meta-constraints. We propose automatic model generation via constraint satisfaction, using our tool Cartier, for model transformation testing. Because the input domain contains a virtually infinite number of models, we compare strategies based on input-domain partitioning to guide model generation. We evaluate the effectiveness of these strategies by performing mutation analysis on the transformation under test using the generated sets of models. Test sets obtained with the partitioning strategies achieve mutation scores of up to 87%, versus 72% for unguided/random generation. These scores are based on the analysis of 360 automatically generated test models for the representative transformation of UML class diagram models to RDBMS models.
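The mutation scores reported above follow the standard definition: the fraction of non-equivalent mutants of the transformation that are killed, i.e., detected, by at least one test model. Below is a minimal Python sketch of this computation; the mutant identifiers and counts are purely illustrative assumptions, not the paper's data (the paper derives its mutants from transformation-specific mutation operators).

def mutation_score(killed: set, mutants: set, equivalent: set) -> float:
    """Fraction of non-equivalent mutants killed by a test set."""
    live_pool = mutants - equivalent  # equivalent mutants can never be killed
    return len(killed & live_pool) / len(live_pool)

# Illustrative run: 200 mutants, 26 judged equivalent, 151 killed by a
# partition-guided test set -> score of ~0.87 (hypothetical numbers).
mutants = {f"m{i}" for i in range(200)}
equivalent = {f"m{i}" for i in range(26)}
killed = {f"m{i}" for i in range(26, 177)}
print(f"mutation score: {mutation_score(killed, mutants, equivalent):.2f}")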
Introduction. Model Driven Engineering (MDE) techniques support the extensive use of models to manage the increasing complexity of software systems. Appropriate abstractions of software system elements ease reasoning and understanding and thus limit the risk of errors in large systems. Automatic model transformations play a critical role in MDE since they automate complex, tedious, error-prone, and recurrent software development tasks. For example, Airbus uses automatic code synthesis from SCADE models to generate the code for embedded controllers in the Airbus A380. Commercial tools for model transformation also exist: Objecteering and Borland's Together can automatically add design patterns to a UML class model, and Esterel Technologies offers automatic code synthesis for safety-critical systems. Other examples of transformations include refining a design model by adding details pertaining to a particular target platform, refactoring a model by changing its structure to enhance design quality, and reverse engineering code to obtain an abstract model.

These software development tasks are critical, so the model transformations that automate them must be validated. A fault in a transformation can introduce a fault in the transformed model which, if undetected and not removed, can propagate to other models in successive development steps. As a fault propagates across transformations, it becomes more difficult to detect and isolate. Moreover, since model transformations are meant to be reused, a single fault in a transformation may result in many faulty models.

Model transformations constitute a class of programs with unique characteristics that make testing them challenging. The complexity of input and output data, the lack of model management tools, and the heterogeneity of transformation languages pose special problems to testers. In this paper we identify the characteristics of current model transformations that contribute to the difficulty of systematically testing them, present promising solutions, and propose possible ways to overcome these barriers.
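To make the notion of a model transformation concrete, consider the UML-class-diagram-to-RDBMS transformation used as the case study in this paper. The following Python sketch shows one such rule over deliberately simplified stand-ins for the two metamodels; real transformations operate on full metamodel instances and are typically written in dedicated transformation languages.

from dataclasses import dataclass, field

@dataclass
class Attribute:
    name: str
    is_primary: bool = False

@dataclass
class UMLClass:
    name: str
    attributes: list = field(default_factory=list)

@dataclass
class Column:
    name: str
    is_key: bool = False

@dataclass
class Table:
    name: str
    columns: list = field(default_factory=list)

def class_to_table(c: UMLClass) -> Table:
    """One transformation rule: a class maps to a table, each attribute
    to a column, and primary attributes to key columns."""
    return Table(name=c.name,
                 columns=[Column(a.name, a.is_primary) for a in c.attributes])

# A test model for this rule is itself a model: an object graph that must
# conform to the input metamodel and to any test-strategy constraints.
book = UMLClass("Book", [Attribute("isbn", is_primary=True), Attribute("title")])
print(class_to_table(book))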