Declarative approaches to business process modeling are regarded as well suited for highly volatile environments, as they enable a high degree of flexibility. However, problems in understanding and maintaining declarative process models often impede their adoption. At the same time, little research has been conducted into how declarative process models are understood. This paper takes a first step toward addressing this gap and reports on an empirical investigation consisting of an exploratory study and a follow-up study focusing on system analysts' sense-making of declarative process models specified in Declare. For this purpose, we distributed real-world Declare models to the participating subjects and asked them to describe the illustrated process and to perform a series of sense-making tasks. The results of our studies indicate that two main strategies for reading Declare models exist: either considering the execution order of the activities in the process model, or orienting oneself by the layout of the process model. In addition, the results indicate that single constraints are handled well by most subjects, while combinations of constraints pose significant challenges. Moreover, the studies revealed that aspects that look similar in imperative and declarative process modeling languages at the graphical level, while having different semantics, cause considerable trouble. This research not only helps guide the future development of tools for supporting system analysts, but also informs the design of declarative process modeling notations and points out typical pitfalls to teachers and educators of future systems analysts.
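To illustrate why combinations of constraints are harder to grasp than single ones, the following sketch encodes two standard Declare constraints, response and precedence, as simple trace checks. The code and activity names are illustrative only and are not taken from the study material.

def satisfies_response(trace, a, b):
    # response(a, b): every occurrence of a is eventually followed by b
    return all(b in trace[i + 1:] for i, act in enumerate(trace) if act == a)

def satisfies_precedence(trace, a, b):
    # precedence(a, b): b may occur only after a has occurred at least once
    seen_a = False
    for act in trace:
        seen_a = seen_a or act == a
        if act == b and not seen_a:
            return False
    return True

# Each constraint is easy to check in isolation ...
print(satisfies_response(["A", "B"], "A", "B"))        # True
print(satisfies_precedence(["A", "B"], "C", "B"))      # False: B requires a prior C

# ... but response(A, B) combined with precedence(C, B) implies a hidden
# dependency: any trace containing A must also contain C before the B that
# A requires, which is not visible from either constraint alone.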
Hierarchy has been widely recognized as a viable approach to dealing with the complexity of conceptual models. In declarative business process models, for instance, hierarchy is realized through sub-processes. While technical implementations of declarative sub-processes exist, their application and semantics, as well as the resulting impact on understandability, are not yet well understood; this research gap is addressed in this work. More specifically, we discuss the semantics and the application of hierarchy and show how sub-processes enhance the expressiveness of declarative modeling languages. Then, we turn to the influence of hierarchy on the understandability of declarative process models. In particular, we present a framework based on cognitive psychology that allows the impact of hierarchy on the understandability of a declarative process model to be assessed. To empirically test the proposed framework, a combination of quantitative and qualitative research methods is followed: while statistical tests provide numerical evidence, think-aloud protocols give insights into the reasoning processes that take place when reading declarative process models.
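As an illustration of the assumed semantics of declarative sub-processes (a sub-process appears as a single activity in the parent model, while its own constraints apply only to the events nested inside it), consider the following hypothetical sketch; the activity names and constraints are made up for illustration and do not come from the paper.

def precedence(trace, a, b):
    # precedence(a, b): b may occur only after a has occurred
    seen_a = False
    for act in trace:
        seen_a = seen_a or act == a
        if act == b and not seen_a:
            return False
    return True

# Parent level: the sub-process "Review" is a single, atomic step whose
# internal behaviour is hidden from the parent model.
parent_trace = ["Receive", "Review", "Archive"]
print(precedence(parent_trace, "Receive", "Review"))            # True

# Inside the sub-process, a separate set of constraints applies, scoped to
# the events the sub-process encloses.
review_trace = ["Check form", "Check content", "Approve"]
print(precedence(review_trace, "Check content", "Approve"))     # True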
Even though considerable progress has been achieved regarding the technical perspective on modeling and supporting business processes, it appears that the human perspective is still often left aside. In particular, we lack an in-depth understanding of how process models are inspected by humans, which strategies are taken, which challenges arise, and which cognitive processes are involved. This paper contributes toward such an understanding and reports on an exploratory study investigating how humans identify and classify quality issues in BPMN process models. In addition to providing preliminary answers to our initial research questions, we indicate further research questions that can be investigated using this approach. Our qualitative analysis shows that humans adopt different strategies for identifying quality issues. We also observed several challenges that arise when humans inspect process models. Finally, we describe the different ways in which the classification of quality issues was approached.
Declarative approaches to process modeling are regarded as well suited for highly volatile environments, as they provide a high degree of flexibility. However, problems in understanding and maintaining declarative process models impede their usage. To compensate for these shortcomings, Test Driven Modeling (TDM) has been proposed. This paper reports on an empirical investigation in which TDM is viewed from two different angles. First, the impact of TDM on communication is explored in a case study. The results indicate that domain experts are inclined to use test cases for communicating with the model builder (system analyst) and prefer them over the process model. The second part of the investigation, a controlled experiment, examines the impact of TDM on process model maintenance. The data gathered in this experiment indicate that the adoption of test cases significantly lowers cognitive load and increases the perceived quality of changes.
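The following sketch illustrates the general idea behind test cases in this setting: a test case pairs an example trace with the outcome the domain expert expects, and is re-checked against the model after every change. The constraint, activities, and test cases below are hypothetical and only hint at how such a check could look; they do not reproduce the actual TDM tooling.

def response(trace, a, b):
    # response(a, b): every occurrence of a is eventually followed by b
    return all(b in trace[i + 1:] for i, act in enumerate(trace) if act == a)

# A model is represented here simply as a list of constraint checks over a trace.
model = [lambda t: response(t, "Submit claim", "Notify customer")]

# Each test case pairs an example trace with the expected outcome.
test_cases = [
    (["Submit claim", "Assess claim", "Notify customer"], True),   # must be allowed
    (["Submit claim", "Assess claim"], False),                     # must be rejected
]

# After every model change, the test cases are replayed against the model.
for trace, expected_valid in test_cases:
    is_valid = all(constraint(trace) for constraint in model)
    print("pass" if is_valid == expected_valid else "FAIL", trace)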