Although generic lists of usability heuristics have been popular with researchers and practitioners, emerging technologies have called for more specific heuristics. One such set of heuristics, for virtual reality applications, was proposed by Sutcliffe and Gault in 2004 [37]. This paper examines research that has cited these heuristics, with the aim of seeing how they have been used. The results showed that a fifth of the citing papers used the heuristics fully or partly, and that researchers adapted them to their current needs. Following this result, we proposed that a patchwork of heuristics might be more useful than a single list. We evaluated a crisis management training simulator using the virtual reality heuristics and discussed how the outcome of the evaluation fitted the patchwork.
Consolidating usability problems from the problem lists of several users can be a cognitively demanding task for evaluators, and it has been suggested that collaboration between evaluators can help this process. To learn how evaluators make decisions in this process, we studied what justifications evaluators give for extracting usability problems and for consolidating them, both individually and collaboratively. An experiment was carried out with eight novice usability evaluators, who extracted usability problems and consolidated them first individually and then collaboratively. The data were analysed using conventional content analysis and by creating argumentation models according to Toulmin's model. The results showed that during usability problem extraction, novice usability evaluators could put forward warrants leading to clear claims when probed, but seldom added qualifiers or rebuttals. When probed, they could identify predefined criteria for a usability problem, which could be acknowledged as backing for their warrants. In the individual setting, novice evaluators had difficulty presenting claims and warrants for their consolidation decisions. Although further study is needed, the results indicated that collaborating pairs tended to argue slightly better than individuals. Through the experiment, novice evaluators' reasoning patterns during problem extraction and consolidation, as well as during their assessment of severity and confidence, could be identified.
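To make the Toulmin components named above concrete, the following minimal Python sketch represents one argument for a usability problem; the field names and example values are illustrative assumptions, not data or code from the study.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class ToulminArgument:
        # One argument for extracting or consolidating a usability problem,
        # structured after the components of Toulmin's model.
        claim: str                       # the conclusion, e.g. "X is a usability problem"
        grounds: str                     # the observed data supporting the claim
        warrant: str                     # why the grounds support the claim
        backing: Optional[str] = None    # support for the warrant, e.g. a predefined criterion
        qualifier: Optional[str] = None  # strength of the claim, e.g. "probably"
        rebuttals: List[str] = field(default_factory=list)  # conditions under which the claim fails

    # A pattern matching the study's finding: a clear claim and warrant,
    # backed by a predefined criterion, but no qualifier or rebuttal.
    example = ToulminArgument(
        claim="Users cannot find the export function",
        grounds="Two participants searched the menus for over a minute",
        warrant="Repeated unsuccessful search indicates poor discoverability",
        backing="Matches a predefined criterion for a usability problem",
    )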
Voice communication is vital for collaboration between first responders and commanders during crisis management. To decrease cost, training can take place in a virtual environment instead of a real one. Building and evaluating a virtual environment for training complex command work is non-trivial. To understand the method-resources required for evaluating a training simulator for crisis response, this paper presents a case study in which several such resources were applied. The method-resources were analysed with respect to usability problems and the Mechanics of Collaboration (MOC). The results show that the Group Observational Technique and the MOC analysis are appropriate for analysing factors of collaboration and communication. The think-aloud technique, observers, domain experts and an advanced task scenario were important resources. Sound and video recordings were necessary for analysing issues in only a few cases.
Designing the feedback that trainees receive in a training simulator while practicing non-technical skills in complex cognitive domains is demanding and, though potentially productive, has received inadequate attention. This paper describes research that aims to understand the impact of fidelity on the feedback provided during training for crisis management. More specifically, the goal was to learn whether the learning feedback types differed across three environments: a real-life training exercise, a table-top exercise and the design of an experiential training simulator. The basis for the comparison was a framework of essential feedback types that emerged from the literature, together with three types of fidelity: physical, functional and psychological. The study showed that there were few occurrences of feedback with psychological fidelity. It also showed that high fidelity can be achieved in the absence of feedback forms categorized as psychological, and that loose organization of an exercise may lead to significant variation in learning outcomes across learning environments. In addition, the research demonstrated how a fidelity analysis of feedback types can be useful for designing feedback for learners in a training simulator.
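As a purely illustrative sketch of such a fidelity analysis (the feedback-type labels and observations below are hypothetical assumptions, not the paper's actual coding scheme or data), each observed feedback occurrence can be coded with a feedback type and one of the three fidelity types, then tallied per environment:

    from collections import Counter
    from enum import Enum

    class Fidelity(Enum):
        PHYSICAL = "physical"
        FUNCTIONAL = "functional"
        PSYCHOLOGICAL = "psychological"

    # Hypothetical coded observations: (environment, feedback type, fidelity).
    observations = [
        ("real-life exercise", "outcome feedback", Fidelity.PHYSICAL),
        ("real-life exercise", "process feedback", Fidelity.FUNCTIONAL),
        ("table-top exercise", "outcome feedback", Fidelity.FUNCTIONAL),
        ("simulator design", "process feedback", Fidelity.PHYSICAL),
    ]

    # Tally fidelity types per environment to compare learning environments;
    # a near-empty PSYCHOLOGICAL column would mirror the study's finding.
    tallies = Counter((env, fid) for env, _, fid in observations)
    for (env, fid), n in sorted(tallies.items(), key=lambda kv: kv[0][0]):
        print(f"{env}: {fid.value} x {n}")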