Model-Driven Engineering promotes the migration from code-centric to model-based software development. Systems consist of collections of models integrating different concerns and perspectives, while semi-automated model transformations analyse quality attributes and generate executable code by combining the information these models carry. Raising the abstraction level to models requires appropriate management technologies supporting the various software development activities. Among these, model comparison is one of the most challenging tasks and plays an essential role in many modelling activities. Its difficulty has led researchers to propose a multitude of approaches adopting different approximation strategies and exploiting specific knowledge of the involved models. Nevertheless, almost no support is provided for the systematic evaluation of comparison approaches against specific scenarios and modelling practices, namely benchmarks. In this article we propose Benji, a framework for the automated generation of model comparison benchmarks. In particular, given a set of difference patterns and an initial model, users can generate model manipulation scenarios resulting from the application of the patterns to the model. The generation support provided by the framework adheres to design principles that are considered essential for the systematic evaluation of model comparison solutions and that are inherited from the general principles of evidence-based software engineering. The framework is validated through representative model comparison benchmark generation scenarios.
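To make the generation idea concrete, the following is a minimal, hypothetical sketch of applying difference patterns to a seed model; it is not Benji's actual API. The dictionary-based model, the pattern functions `rename_element` and `add_attribute`, and the tuple-based difference reports are all illustrative assumptions.

```python
import copy

# A "model" reduced to named elements with attributes (illustrative only).
seed_model = {
    "Person": {"attributes": ["name", "age"]},
    "Address": {"attributes": ["street", "city"]},
}

# Difference patterns: each edits a copy of the model and reports the
# expected differences that a comparison tool should later detect.
def rename_element(model, old, new):
    model[new] = model.pop(old)
    return [("rename", old, new)]

def add_attribute(model, element, attribute):
    model[element]["attributes"].append(attribute)
    return [("add-attribute", element, attribute)]

patterns = [
    lambda m: rename_element(m, "Person", "Customer"),
    lambda m: add_attribute(m, "Address", "zipCode"),
]

# Each benchmark case pairs the seed model, a mutated variant, and the
# expected difference report, which serves as the evaluation oracle.
benchmark = []
for apply_pattern in patterns:
    variant = copy.deepcopy(seed_model)
    expected_diffs = apply_pattern(variant)
    benchmark.append((seed_model, variant, expected_diffs))

for original, variant, diffs in benchmark:
    print(diffs)
```

The key point the sketch illustrates is that each generated case carries its expected differences alongside the model pair, so a comparison tool's output can be checked automatically against a known ground truth.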
Software-intensive systems in most domains, from autonomous vehicles to healthcare, are becoming predominantly parallel in order to efficiently manage large amounts of data in short (or even real) time. The literature on languages for parallel computing is incredibly rich, which makes it difficult for researchers and practitioners, even those experienced in this very field, to get a grasp of it. With this work we provide a comprehensive, structured, and detailed snapshot of documented research on those languages in order to identify trends, technical characteristics, open challenges, and research directions. In this article, we report on the planning, execution, and results of our systematic review of peer-reviewed and grey literature, which aimed at providing such a snapshot by analysing 225 studies.