Background
Electronic learning (e-learning) in postgraduate medical education has evolved rapidly; however, it tends to be evaluated only on its primary outcome or learning aim, whereas its effectiveness also depends on its instructional design. We believe it is important to have an overview of all the methods currently used to evaluate e-learning design, so that the preferred method can be identified and the next steps needed to continue evaluating postgraduate medical e-learning can be outlined.

Objective
This study aimed to identify and compare the outcomes and methods used to evaluate postgraduate medical e-learning.

Methods
We performed a systematic literature review using the Web of Science, PubMed, Education Resources Information Center, and Cumulative Index to Nursing and Allied Health Literature databases. Studies that used postgraduates as participants and evaluated any form of e-learning were included. Studies without any evaluation outcome (eg, a mere description of e-learning) were excluded.

Results
The initial search identified 5973 articles, of which 418 were included in our analysis. The study types were trials, prospective cohorts, case reports, and reviews. The primary outcomes of the included studies were knowledge, skills, and attitude. A total of 12 instruments were used to evaluate a specific primary outcome, such as laparoscopic skills or training-related stress. The secondary outcomes mainly concerned satisfaction, motivation, efficiency, and usefulness. We found 13 e-learning design methods across 19 studies (4.5%, 19/418). These methods evaluated usability, motivational characteristics, or the use of learning styles, or were based on instructional design theories such as Gagne’s instructional design, the Heidelberg inventory, Kern’s curriculum development steps, and a scale based on cognitive load theory. Finally, 2 instruments attempted to evaluate several aspects of a design, based on the experience of creating e-learning.

Conclusions
Evaluating the effect of e-learning design is complicated. Given the diversity of e-learning methods, there are many ways to carry out such an evaluation, and probably many ways to do so correctly. However, the current literature shows that no consensus has yet been reached on which indicators to evaluate. There is a great need for a properly constructed, validated, and tested evaluation tool. Such a tool would offer a more homogeneous way to compare the effects of e-learning and would help authors of e-learning continue to improve their products.