CONTEXT Formal qualitative synthesis is the process of pooling qualitative and mixed-method research data and then drawing conclusions regarding the collective meaning of the research. Qualitative synthesis is regularly used within systematic reviews in the health professions literature, although such use has been heavily debated in the general literature. This controversy arises in part from the inherent tensions that emerge when generalisations are derived from in-depth studies that are heavily context-dependent. METHODS We explore three representative qualitative synthesis methodologies: thematic analysis, meta-ethnography, and realist synthesis. These can be understood across two dimensions: integrative to interpretative, and idealist to realist. Three examples are used to illustrate the relative strengths and limitations of these approaches. DISCUSSION Against a backdrop of controversy and diverse methodologies, readers must take a critical stance when reading literature reviews that use qualitative synthesis to derive their findings. We argue that notions of qualitative rigour, such as transparency and acknowledgement of the researchers' stance, should be applied to qualitative synthesis.
There is a need to improve methods used to assess students’ clinical skills in postgraduate clinical psychology training, but research into the use of new competency‐based assessments is sparse. The current study examines the potential application of the objective structured clinical examination (OSCE) to clinical psychology training. Commonly used in medical training, the OSCE requires students to demonstrate clinical skills across a number of standardised stations with trained actors playing the part of patients. A pilot OSCE was conducted with nine students from a psychology doctoral program. Participants completed a brief student feedback questionnaire and attended a focus group after the OSCE. They also completed the State–Trait Anxiety Inventory before and after the OSCE. The results showed that students viewed the OSCE as a valid, realistic, and fair assessment method. They reported high levels of anxiety during the OSCE but noted that it was a positive learning experience. These results suggest that OSCEs should be considered in clinical psychology training programs for the assessment of clinical competence. Further evaluations of the validity and reliability of the clinical psychology OSCE using a larger sample size are required. Other limitations and implications of the study are discussed.
Context Research suggests that feedback in the health professions is less useful than we would like. In this paper, we argue that feedback has become reliant on myths that perpetuate unproductive rituals. Feedback often resembles a discrete episode of an educator “telling,” rather than an active and iterative involvement of the learner in a future‐facing process. With this orientation towards past events, it is not surprising that learners become defensive or disengaged when they are reminded of their deficits. Methods We tackle three myths of feedback: (a) feedback needs praise‐criticism balancing rules; (b) feedback is a skill residing within the teacher; and (c) feedback is an input only. For each myth we provide a reframing with supporting examples from the literature. Conclusions Equipping learners to engage in feedback processes may reduce the emotional burden on both parties, rendering techniques such as the feedback sandwich redundant. We also highlight the benefits for learners and teachers of conceptualising feedback as a relational activity, and of tracing the effects of information exchanges. These effects may be immediate or latent, and may manifest in different forms such as changes in learner evaluative judgement or professional identity.
A wide range of technologies has been developed to enhance assessment, but adoption has been inconsistent, despite assessment being critical to student learning and certification. To understand why this is the case and how it can be addressed, we need to explore the perspectives of academics responsible for designing and implementing technology‐supported assessment strategies. This paper reports on the experience of designing technology‐supported assessment, based on interviews with 33 Australian university teachers. The findings reveal the desire to achieve greater efficiencies and to be contemporary and innovative as key drivers of technology adoption for assessment. Participants sought to shape student behaviours through their designs and made adaptations in response to positive feedback and undesirable outcomes. Many designs required modification because of a lack of appropriate support, leading to compromise and, in some cases, abandonment. These findings highlight the challenges to effective technology‐supported assessment design and demonstrate the difficulties university teachers face when attempting to negotiate mixed messages within institutions and the demands of design work. We use these findings to suggest opportunities to improve support by offering pedagogical guidance and technical help at critical stages of the design process and by encouraging an iterative approach to design.