Ensuring the effectiveness factor of usability means ensuring that the application allows users to reach their goals and perform their tasks. One of the few means of achieving this relies on task analysis and on demonstrating the compatibility between the interactive application and its task models. Synergistic execution supports the validation of a system against its task model by co-executing both and comparing the behavior of the system against what the model prescribes, allowing a tester to explore scenarios and detect deviations between the two behaviors. However, manual exploration of scenarios does not guarantee good coverage of the analysis. To address this, we resort to model-based testing (MBT) techniques: we generate scenarios from the task model and co-execute them over both the task model and the system. During the generation step we also explore the possibility of including user error in the analysis. Automating the execution of the scenarios closes the process. We illustrate the approach with an example.
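For concreteness, the following is a minimal sketch of what such a co-execution loop might look like, assuming a toy task-model oracle that knows which actions it allows after a given trace, and a `system_step` callback that drives the application under test. All names here are hypothetical illustrations, not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class TaskModelOracle:
    """Toy oracle: the set of traces accepted by the task model."""
    accepted_prefixes: set = field(default_factory=set)

    def enabled(self, trace: tuple) -> set:
        """Actions the task model allows after the given trace."""
        return {p[len(trace)] for p in self.accepted_prefixes
                if len(p) > len(trace) and p[:len(trace)] == trace}

def co_execute(scenario, oracle, system_step):
    """Replay a scenario step by step on both the model and the system.

    system_step(action) drives the application under test; deviations are
    collected rather than raised, so one run can report several discrepancies.
    """
    trace, deviations = (), []
    for action in scenario:
        if action not in oracle.enabled(trace):
            deviations.append((trace, action, "not allowed by task model"))
        elif not system_step(action):
            deviations.append((trace, action, "rejected by the system"))
        trace += (action,)
    return deviations

if __name__ == "__main__":
    oracle = TaskModelOracle({("login", "search", "book")})
    ui_accepts = {"login", "search"}  # hypothetical system missing "book"
    print(co_execute(("login", "search", "book"), oracle,
                     lambda a: a in ui_accepts))
```

Here the task model acts as the oracle: a step the model forbids, or a step the model allows but the system refuses, is flagged as a deviation between the two behaviors.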
Ensuring that an interactive application allows users to perform their activities and reach their goals is critical to its overall usability; indeed, the effectiveness factor of usability refers directly to this capability. Assessing effectiveness is a real challenge for usability testing, as usability tests cover only a very limited number of tasks and activities. This paper proposes an approach toward automated testing of the effectiveness of interactive applications. To this end we rely on two main elements: an exhaustive description of users' activities and goals using task models, and the generation of scenarios (from the task models) to be tested over the application. However, the number of scenarios can be very high (beyond the computing capabilities of machines), and we might end up testing multiple similar scenarios. To overcome these problems, we propose strategies based on task model manipulations (e.g., manipulating task nodes, operator nodes, information ...) that result in a more intelligent test case generation approach. For each strategy, we investigate its relevance (both in terms of test case generation and in terms of validity with respect to the original task models) and illustrate it with a small example. Finally, the proposed strategies are applied to a real-size case study, demonstrating their relevance and validity for testing interactive applications.

CCS Concepts: • Human-centered computing → HCI design and evaluation methods; Graphical user interfaces; HCI theory, concepts and models; • Software and its engineering → Software testing and debugging; Model-driven software engineering;
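To make the generation step concrete, the sketch below enumerates scenarios from a small task-model tree with sequence, choice, and iteration operators; bounding how far iterations are unfolded (`max_loops`) stands in for the kind of task-model manipulation strategies described above, taming the scenario explosion. The node encoding and names are illustrative assumptions, not the paper's notation.

```python
from itertools import product

def scenarios(node, max_loops=2):
    """Enumerate the action sequences a (toy) task model allows."""
    kind = node[0]
    if kind == "action":                      # leaf task: one user action
        return [(node[1],)]
    if kind == "seq":                         # sequential composition
        parts = [scenarios(child, max_loops) for child in node[1:]]
        return [sum(combo, ()) for combo in product(*parts)]
    if kind == "choice":                      # alternative subtasks
        return [s for child in node[1:] for s in scenarios(child, max_loops)]
    if kind == "loop":                        # iteration, unfolded at most
        body = scenarios(node[1], max_loops)  # max_loops times so the
        result, frontier = [()], [()]         # scenario set stays finite
        for _ in range(max_loops):
            frontier = [f + b for f in frontier for b in body]
            result += frontier
        return result
    raise ValueError(f"unknown node kind: {kind}")

if __name__ == "__main__":
    model = ("seq", ("action", "login"),
                    ("loop", ("choice", ("action", "search"),
                                        ("action", "filter"))),
                    ("action", "book"))
    for s in scenarios(model, max_loops=1):
        print(s)
```

With `max_loops=1` this yields three scenarios (skipping the loop, or running it once with either alternative); raising the bound grows the set combinatorially, which is exactly why manipulation strategies of this kind matter for keeping test case generation tractable.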