Multi-Level Modeling is receiving increasing levels of interest and its active research community is continuing to make progress. However, to advance the discipline effectively it is necessary to increase industry adoption and achieve better community cohesion. We believe that the key to addressing both these challenges is to promote the creation of more comparisons in the multi-level modeling field based on meaningful objective evaluations. In this position paper, we provide our view on what constitutes meaningful evaluations and discuss some of the issues involved in obtaining them, while presenting a broad overview of existing multi-level modeling evaluations. In particular, we emphasize the importance of understanding and managing the difference between internal and external qualities.
© Springer International Publishing Switzerland 2016. The introduction of ontological classification to support domain-metamodeling has been pivotal in the emergence of multi-level modeling as a dynamic research area. However, existing expositions of ontological classification have only used a limited context to distinguish it from the historically more commonly used linguistic classification. In important areas such as domain-specific languages and classic language engineering the distinction can appear to become blurred and the role of ontological classification is obscured, if not fundamentally challenged. In this paper we therefore examine critical points of confusion regarding the distinction and provide an expanded explanation of the differences. We maintain that optimally utilizing ontological classification, even for tasks that traditionally have only been viewed as language engineering, is critical for mastering the challenges in complex systems modeling including the validation of multi-language models.
© 2015 Elsevier B.V.

Context: Since multi-level modelling emerged as a strategy for leveraging classification levels in conceptual models, there have been discussions about what it entails and how best to support it. Recently, some authors have claimed that the deep modelling approach to multi-level modelling entails paradoxes and significant weaknesses. By drawing upon concepts from speech act theory and foundational ontologies, these authors argue that hitherto accepted principles for deep modelling should be abandoned and an alternative approach be adopted instead (Eriksson et al., 2013).

Objective: We investigate the validity of these claims and motivate the need to shift the focus of the debate from philosophical arguments to modelling pragmatics.

Method: We present each of the main objections raised against deep modelling in turn, classify them according to the kinds of arguments put forward, and analyse the cogency of the supporting justification. We furthermore analyse the counter-proposal regarding its pragmatic value for modellers.

Results: Most of the criticisms against deep modelling are based on mismatches between the premisses used in published definitions of deep modelling and those used by the authors as the basis of their challenges. Hence, most of the criticisms levelled at deep modelling do not actually apply to deep modelling as defined in the literature. We also explain how the proposed alternative introduces new problems of its own, and evaluate its merits from a pragmatic modelling perspective. Finally, we show how deep modelling is indeed compatible with, and can be founded on, classic work in linguistics and logic.

Conclusions: The inappropriate interpretations of the core principles of deep modelling identified in this article indicate that previous descriptions of them have not had sufficient clarity. We therefore provide further clarification and foundational background material to reduce the chance of future misunderstandings and help establish deep modelling as a solid foundation for multi-level modelling.