Computer-based simulation is utilised across various educational fields, employing diverse technologies to facilitate practical understanding of content and the acquisition of skills that can help close the gap between theory and practice. The possibility of providing scenarios that resemble on-the-job tasks enables instructors both to train trainees and to assess their comprehension of the tasks at hand. The practices, as well as the technologies, for the assessment of simulation-based training vary across disciplines. Our motivation is to address quality procedures from a cross-discipline perspective. There appears to be a lack of scientific investigation that steps back from specific applications and examines how assessment instruments can be developed to fit training outcomes regardless of the professional discipline. This scoping literature review of empirical studies aims to do so by exploring how competency is assessed with computer-based simulation. The objectives are: (1) to apply established training research theory to structure a decomposition of assessment instruments; (2) to review approaches to assessment factored over this structure; and (3) to discuss quality procedures taken in the creation of the reported instruments and then propose an approach to assessment instrumentation that can be applied independently of discipline, across the range of current technology, and for any focal outcome competency. By reviewing a spectrum of fields, we capture reported assessment practices across a range of currently employed technologies. This literature review combines the methods of a scoping review with the qualities of a systematic literature review while adhering to conventional reporting guidelines. This allowed us to provide insight into current approaches and research designs that applied measurements ranging from automated assessment to observer rating of simulation-based training in professional work settings. We found that all reviewed studies measured skill-based outcomes with some variation, and that more theoretical and empirical work is needed to close the gap on quality instrumentation and its validity evidence. Our contribution to the field of training research is the operationalised component structure and the synthesised approach to instrumentation, which can offer researchers and practitioners guidance and inspiration for developing and conducting quality assessments in competency development.