Embracing alternative epistemological and methodological approaches for studying visual expertise can lead to a more balanced and robust future for understanding superior visual skills in medical image diagnosis as well as other medical fields.
Lack of published studies on medical students’ practice of physical examination skills outside timetabled training sessions inspired this study into which activities students undertake to improve their skills and which factors influence them. Six focus groups, comprising a total of 52 students from Years 1–3, were conducted using a pre-established interview guide. Interviews were recorded, transcribed and analyzed using qualitative methods. The interview guide was based on questionnaire results; the overall response rate for Years 1–3 was 90% (n = 875). Students report a variety of activities to improve their physical examination skills. On average, students devote 20% of self-study time to skills training, with Year 1 students practising significantly more than Year 3 students. Practice patterns shift from just-in-time learning to a longitudinal, self-directed approach. Factors influencing this change are assessment methods and simulated/real patients. Learning resources used include textbooks, examination guidelines, scientific articles, the Internet, videos/DVDs and scoring forms from previous OSCEs. Students practise skills on fellow students in university rooms or at home; family and friends were also mentioned as helpers. Simulated and real patients stimulated students to practise physical examination skills, initially causing confusion and anxiety about skill performance but ultimately leading to increased feelings of competence. Skills that are difficult or enjoyable stimulate students to practise. The strategies students adopt to master physical examination skills outside timetabled training sessions are self-directed. OSCE assessment has an influence, but learning also takes place when there is no upcoming assessment. Simulated and real patients provide strong incentives to work on skills, and early patient contacts make students feel more prepared for clinical practice.
Educators in medical image interpretation have difficulty finding scientific evidence as to how they should design their instruction. We review and comment on 81 papers that investigated instructional design in medical image interpretation. We distinguish between studies that evaluated complete offline courses and curricula, studies that evaluated e-learning modules, and studies that evaluated specific educational interventions. Twenty-three percent of all studies evaluated the implementation of complete courses or curricula, and 44% of the studies evaluated the implementation of e-learning modules. We argue that these studies have encouraging results but provide little information for educators: too many differences exist between conditions to unambiguously attribute the learning effects to specific instructional techniques. Moreover, concepts are not uniformly defined, and methodological weaknesses further limit the usefulness of the evidence provided by these studies. Thirty-two percent of the studies evaluated a specific interventional technique. We discuss three theoretical frameworks that informed these studies: diagnostic reasoning, cognitive schemas and study strategies. Research on diagnostic reasoning suggests teaching students to start with non-analytic reasoning and subsequently apply analytic reasoning, but little is known about how to train non-analytic reasoning. Research on cognitive schemas investigated activities that support the development of appropriate cognitive schemas. Finally, research on study strategies supports the effectiveness of practice testing, but other study strategies may also be applicable to learning medical image interpretation. Our commentary highlights the value of evaluating specific instructional techniques, but further evidence is required to optimally inform educators in medical image interpretation.
Both training types showed similar increases in sensitivity. Therefore, it might be advisable to pay less attention to systematic viewing and more attention to content, such as the radiologic appearances of diseases.
Objectives Structured reporting (SR) is suggested to be a promising tool in clinical radiology practice. In order to implement such an emerging innovation, it is necessary to verify that radiology reporting can benefit from SR. The purpose of this systematic review is therefore to explore the level of evidence for structured reporting in radiology. Additionally, this review provides an overview of the current status of SR in radiology. Methods A narrative systematic review was conducted, searching PubMed, Embase, and the Cochrane Library using the syntax ‘radiol*’ AND ‘structur*’ AND ‘report*’. Structured reporting was divided into SR level 1, structured layout (use of templates and checklists), and SR level 2, structured content (a drop-down menu, point-and-click, or clickable decision trees). Two reviewers screened the search results and included all quantitative experimental studies that discussed SR in radiology. A thematic analysis was performed to appraise the level of evidence. Results The search yielded 63 relevant full-text articles out of a total of 8561 articles. Thematic analysis resulted in 44 SR level 1 and 19 SR level 2 reports. Only one paper was scored as the highest level of evidence; it concerned a double-cohort study with a randomized trial design. Conclusion The level of evidence for implementing SR in radiology is still low, and outcomes should be interpreted with caution. Key Points
• Structured reporting is increasingly being used in radiology, especially in abdominal and neuroradiological CT and MRI reports.
• SR can be subdivided into structured layout (SR level 1) and structured content (SR level 2): the former is a template in which the reporter has to report; the latter is an IT-based manner in which the content of the radiology report can be inserted and displayed in the report.
• Despite the extensive amount of research on structured reporting, the level of evidence is low.