“…First, in preliminary tests, we found that combined modifications to elements in particular are challenging for the model differencing framework. Second, refactoring operations are well documented and have been shown to occur frequently in real-world scenarios (Sidhu et al. 2018; Tsantalis et al. 2018).…”
Section: Evolution Scenarios
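To see why the combined modifications mentioned in the snippet above are hard, consider that a state-based matcher must first pair up elements across the two states before it can emit changes: if an element is renamed and retyped in the same step, a matcher that anchors on names has nothing left to pair, and one logical edit degenerates into a delete plus a create. The following Python sketch is purely illustrative; diff_by_name and the element structure are hypothetical and not part of the evaluated framework.

# Illustrative sketch only: hypothetical names, not the evaluated framework.
# A matcher that anchors elements on their name cannot pair the two states
# of an element whose name *and* type changed in the same step.

def diff_by_name(old_elements, new_elements):
    """Naively match elements across two states by name and emit changes."""
    old_by_name = {e["name"]: e for e in old_elements}
    new_by_name = {e["name"]: e for e in new_elements}
    changes = []
    for name, old in old_by_name.items():
        if name in new_by_name:
            if old != new_by_name[name]:
                changes.append(("modify", name))
        else:
            changes.append(("delete", name))
    changes += [("create", n) for n in new_by_name if n not in old_by_name]
    return changes

# One attribute, renamed and retyped in a single combined modification:
old = [{"name": "id", "type": "int"}]
new = [{"name": "identifier", "type": "str"}]

# The one logical edit degenerates into a delete plus a create:
print(diff_by_name(old, new))  # [('delete', 'id'), ('create', 'identifier')]

Real differencing frameworks use richer similarity heuristics than a plain name match, but they face the same ambiguity when several identifying properties change at once, which is what makes combined modifications a natural stress test.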
“…Table 1 lists the considered evolution scenarios. Each scenario is based on the refactoring operation of the same name as described by Sidhu et al. (2018). While these refactoring operations were explicitly created for UML class diagrams, the kinds of change can be mapped to other domains.…”
Section: Evolution Scenarios
“…Regarding the external validity of our evaluation results, we argue that similar results can be obtained with models from other domains, as the problems shown in the change sequence are metamodel-independent. Furthermore, we argue that the evolution scenarios in our evaluation are relevant in practice, as they are taken from existing literature on common refactoring patterns for object-oriented software and models (Wimmer et al. 2012; Sidhu et al. 2018; Tsantalis et al. 2013; Fowler 2019). While it would have been possible to employ randomly generated evolution scenarios instead, most such random scenarios could not be considered realistic.…”
While developers and users of modern software systems usually only need to interact with a specific part of the system at a time, they are hindered by the ever-increasing complexity of the entire system. Views are projections of underlying models and can be employed to abstract from that complexity. When a view is modified, the changes must be propagated back into the underlying model without overriding simultaneous modifications. Hence, the view needs to provide a fine-grained sequence of changes to update the model in a minimally invasive way. Such fine-grained changes are often unavailable for views that integrate with existing workflows and tools. To bridge this gap, model differencing approaches can be leveraged to compare two states of a view and derive an estimated change sequence. However, these model differencing approaches are not intended to operate on views, as their correctness is judged solely by comparing the input models. For views, the changes are derived from the view states, but their correctness depends on the underlying model. This work introduces a refined notion of correctness for change sequences in the context of model-view consistency. Furthermore, we evaluate state-of-the-art model differencing with regard to model-view consistency. Our results show that model differencing performs very well in most scenarios. However, incorrect change sequences were derived for two common types of refactoring operations, leading to an incorrect model state. These types are easy to reproduce and likely to occur in practice. By considering our change sequence properties in the view type design, incorrect change sequences can be detected and semi-automatically repaired to prevent such incorrect model states.
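The abstract's central distinction, that a change sequence derived from view states must be judged by the model state it produces, can be made concrete with a small sketch. All names below (apply_changes, the dict-based model, the change tuples) are hypothetical illustrations under assumed semantics, not the paper's formalism: two change sequences that are indistinguishable at the view level can produce different underlying models when the model carries information the view projects away.

# Minimal sketch (hypothetical names, not the paper's framework) of the
# refined correctness notion: a change sequence derived from two view
# states is judged by the model state it produces, not by the view states.

def apply_changes(model, changes):
    """Replay a change sequence on a dict-based model of named elements."""
    model = {name: dict(attrs) for name, attrs in model.items()}
    for op, name, payload in changes:
        if op == "create":
            model[name] = dict(payload)
        elif op == "delete":
            model.pop(name, None)
        elif op == "rename":
            model[payload] = model.pop(name)
    return model

# The underlying model carries 'doc', which the view projects away:
model = {"id": {"type": "int", "doc": "primary key"}}

# Two change sequences a view differencer might derive for a rename; they
# are indistinguishable when judged only by the resulting *view* states:
fine_grained = [("rename", "id", "identifier")]
delete_create = [("delete", "id", None),
                 ("create", "identifier", {"type": "int"})]

print(apply_changes(model, fine_grained))
# {'identifier': {'type': 'int', 'doc': 'primary key'}}
print(apply_changes(model, delete_create))
# {'identifier': {'type': 'int'}}  <- hidden 'doc' silently lost

This is why comparing the input views alone cannot certify a change sequence: the delete-plus-create variant reproduces the target view exactly while still corrupting the underlying model.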
“…Fig. 10 shows the articles by category; the figure lists the corresponding references under headings such as Requirement Traceability and, under Architecture and Design, Design Modeling.…”
Context: The software development industry is rapidly adopting machine learning to transition modern-day software systems towards highly intelligent and self-learning systems. However, the full potential of machine learning for improving the software engineering life cycle itself is yet to be discovered, i.e., to what extent machine learning can help reduce the effort and complexity of software engineering and improve the quality of the resulting software systems. To date, no comprehensive study exists that explores the current state of the art on the adoption of machine learning across software engineering life cycle stages. Objective: This article addresses the aforementioned problem and aims to present the state of the art on the growing number of uses of machine learning in software engineering. Method: We conduct a systematic mapping study on applications of machine learning to software engineering, following the standard guidelines and principles of empirical software engineering. Results: This study introduces a machine learning for software engineering (MLSE) taxonomy classifying state-of-the-art machine learning techniques according to their applicability to the various software engineering life cycle stages. Overall, 227 articles were rigorously selected and analyzed as a result of this study. Conclusion: From the selected articles, we explore a variety of aspects that should help academics and practitioners alike understand the potential of adopting machine learning techniques during software engineering projects.