Our review has allowed us to identify common underlying conceptualisations of observed rater mechanisms and subsequently to propose a comprehensive, albeit complex, framework for the dynamic and contextual nature of the rating process. This framework could help bridge the gap between researchers adopting different perspectives when studying rater cognition and enable the interpretation of contradictory findings about rater performance by determining which mechanisms are enabled or disabled in a given context.
The present study shows the beneficial influence of generating self-explanations when dealing with less familiar clinical contexts. Generating self-explanations without feedback resulted in better diagnostic performance than that of the control group at 1 week after the intervention.
Validity is one of the most debated constructs in our field; debates abound about what is legitimate and what is not, and the word continues to be used in ways that are explicitly disavowed by current practice guidelines. The resultant tensions have not been well characterized, yet their existence suggests that different uses may maintain some value for the user that needs to be better understood. We conducted an empirical form of Discourse Analysis to document the multiple ways in which validity is described, understood, and used in the health professions education field. We created and analyzed an archive of texts identified from multiple sources, including formal databases such as PubMed, ERIC and PsycINFO as well as the authors' personal assessment libraries. An iterative analytic process was used to identify, discuss, and characterize emerging discourses about validity. Three discourses of validity were identified. Validity as a test characteristic is underpinned by the notion that validity is an intrinsic property of a tool and could, therefore, be seen as content and context independent. Validity as an argument-based evidentiary chain emphasizes the importance of supporting the interpretation of assessment results with ongoing analysis such that validity does not belong to the tool/instrument itself; the emphasis is on process-based validation (emphasizing the journey instead of the goal). Validity as a social imperative foregrounds the consequences of assessment at the individual and societal levels, be they positive or negative. The existence of different discourses may explain, in part, the results observed in recent systematic reviews that highlighted discrepancies and tensions between recommendations for practice and the validation practices that are actually adopted and reported. Some of these practices, despite contravening accepted validation 'guidelines', may nevertheless respond to different and somewhat unarticulated needs within health professional education.
Testing has been shown to enhance retention of learned information beyond simple studying, a phenomenon known as test-enhanced learning (TEL). Research has shown that TEL effects are greater for tests that require the production of responses [e.g., short-answer questions (SAQs)] relative to tests that require the recognition of correct answers [e.g., multiple-choice questions (MCQs)]. High-stakes licensure examinations have recently differentiated MCQs that require the application of clinical knowledge (context-rich MCQs) from MCQs that rely on the recognition of "facts" (context-free MCQs). The present study investigated the influence of different types of educational activities (including studying, SAQs, context-rich MCQs and context-free MCQs) on later performance on a mock licensure examination. Fourth-year medical students (n = 224) from four Quebec universities completed four educational activities: one reading-based activity and three quiz-based activities (SAQs, context-rich MCQs, and context-free MCQs). We assessed the influence of the type of educational activity on students' subsequent performance in a mock licensure examination, which consisted of two types of context-rich MCQs: (1) verbatim replications of previous items and (2) items that tested the same learning objective but were new. Mean accuracy scores on the mock licensure exam were higher when intervening educational activities contained either context-rich MCQs (mean z-score = 0.40) or SAQs (M = 0.39) compared to context-free MCQs (M = -0.38) or study-only items (M = -0.42; all p < 0.001). Higher mean scores were only present for verbatim items (p < 0.001). The benefit of testing was observed when intervening educational activities required either the generation of a response (SAQs) or the application of knowledge (context-rich MCQs); however, this effect was only observed for verbatim test items. These data provide evidence that context-rich MCQs and SAQs enhance learning through testing compared to context-free MCQs or studying alone. The extent to which these findings generalize beyond verbatim questions remains to be determined.
OBJECTIVE General guidelines for teaching clinical reasoning have received much attention, despite a paucity of instructional approaches with demonstrated effectiveness. As suggested in a recent experimental study, self-explanation while solving clinical cases may be an effective strategy to foster reasoning in clinical clerks dealing with less familiar cases. However, the mechanisms that mediate this benefit have not been specifically investigated. The aim of this study was to explore the types of knowledge used by students when solving familiar and less familiar clinical cases with self-explanation. METHODS In a previous study, 36 third-year medical students diagnosed familiar and less familiar clinical cases either by engaging in self-explanation or not. Based on an analysis of previously collected data, the present study compared the content of self-explanation protocols generated by seven randomly selected students while solving four familiar and four less familiar cases. In total, 56 verbal protocols (28 familiar and 28 less familiar) were segmented and coded using the following categories: paraphrases, biomedical inferences, clinical inferences, monitoring statements and errors. RESULTS Students provided more self-explanation segments for less familiar cases (M = 275.29) than for familiar cases (M = 248.71, p = 0.046). They also provided significantly more paraphrases (p = 0.001) and made more errors (p = 0.008). A significant interaction was found between familiarity and the type of inferences (biomedical versus clinical, p = 0.016): when self-explaining less familiar cases, students provided significantly more biomedical inferences than when self-explaining familiar cases.