Electronic health records (eHRs) play an increasingly important role in the documentation and exchange of information in multi- and interdisciplinary patient care. Although eHRs are associated with mixed evidence in terms of effectiveness, they are undeniably the health record form of the future. This presents several learning opportunities and challenges for medical education. This review connects the concept of eHRs to key competencies of physicians and elaborates current learning-science perspectives on diagnostic and clinical reasoning, based on a theoretical framework of scientific reasoning and argumentation. It concludes with an integrative vision of the use of eHRs, and the special role of the patient, for teaching and learning in medicine.
Peer-feedback efficiency may be influenced by the oftentimes voiced concern of students that they perceive their peers' competence to provide feedback as inadequate. The feedback literature also identifies mindful processing of (peer) feedback and (peer-)feedback content as important for its efficiency, but lacks systematic investigation of these factors. In a 2 × 2 factorial design, peer-feedback content (concise general feedback [CGF] vs. elaborated specific feedback [ESF]) and the competence of the sender (high vs. low) were varied. Students received a scenario containing an essay by a fictional student and fictional peer feedback, a perception questionnaire, and a text-revision, distraction, and peer-feedback recall task. Eye tracking was applied to measure how the written peer feedback was (re-)read, e.g., glance duration on exact words and sentences. Mindful cognitive processing was inferred from the relation between glance duration and (a) text-revision performance and (b) peer-feedback recall performance. Feedback from a highly competent peer was perceived as more adequate. Compared to CGF, participants who received ESF scored higher on positive affect towards the peer feedback. No effects were found for peer-feedback content and/or sender's competence level on
Advancements in artificial intelligence are rapidly increasing. New-generation large language models, such as ChatGPT and GPT‐4, have the potential to transform educational approaches, such as peer‐feedback. To investigate peer‐feedback at the intersection of natural language processing (NLP) and educational research, this paper suggests a cross‐disciplinary framework that aims to facilitate the development of NLP‐based adaptive measures for supporting peer‐feedback processes in digital learning environments. To conceptualize this process, we introduce a peer‐feedback process model, which describes learners' activities and textual products. Further, we introduce a terminological and procedural scheme that facilitates systematically deriving measures to foster the peer‐feedback process and describes how NLP may enhance the adaptivity of such learning support. Building on prior research in education and NLP, we apply this scheme to all learner activities of the peer‐feedback process model to exemplify a range of NLP‐based adaptive support measures. We also discuss current challenges and suggest directions for future cross‐disciplinary research on the effectiveness and other dimensions of NLP‐based adaptive support for peer‐feedback. Building on our suggested framework, future research and collaborations at the intersection of education and NLP can innovate peer‐feedback in digital learning environments.
Practitioner notes
What is already known about this topic
There is considerable research in educational science on peer‐feedback processes.
Natural language processing facilitates the analysis of students' textual data.
There is a lack of systematic orientation regarding which NLP techniques can be applied to which data to effectively support the peer‐feedback process.
What this paper adds
A comprehensive overview model that describes the relevant activities and products in the peer‐feedback process.
A terminological and procedural scheme for designing NLP‐based adaptive support measures.
An application of this scheme to the peer‐feedback process, exemplifying use cases of how NLP may be employed to support each learner activity during peer‐feedback.
Implications for practice and/or policy
To boost the effectiveness of their peer‐feedback scenarios, instructors and instructional designers should identify relevant leverage points, corresponding support measures, adaptation targets and automation goals based on theory and empirical findings.
Management and IT departments of higher education institutions should strive to provide digital tools based on modern NLP models and integrate them into their learning management systems; those tools should help translate the automation goals requested by instructors into prediction targets, take relevant data as input, and allow for evaluating the predictions.