Background: It is unlikely that applications of artificial intelligence (AI) will completely replace physicians. However, it is very likely that AI applications will take over many of their roles and generate new tasks in medical care. To be ready for these new roles and tasks, medical students and physicians will need to understand the fundamentals of AI and data science, mathematical concepts, and related ethical and medico-legal issues, in addition to standard medical principles. Nevertheless, no valid and reliable instrument is available in the literature to measure medical AI readiness. In this study, we describe the development of a valid and reliable psychometric instrument for assessing medical students' perceived readiness for AI technologies and their applications in medicine.

Methods: To define the competencies medical students require for AI, opinions from a diverse set of experts were obtained with a qualitative method and used as the theoretical framework when creating the item pool of the scale. Exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) were then applied.

Results: A total of 568 medical students in the EFA phase and 329 medical students in the CFA phase, enrolled at two public universities in Turkey, participated in this study. The initial 27-item pool was reduced to a 22-item scale with a four-factor structure (cognition, ability, vision, and ethics) explaining 50.9% of the cumulative variance in the EFA. Cronbach's alpha reliability coefficient was 0.87. CFA indicated an appropriate fit for the four-factor model (χ2/df = 3.81, RMSEA = 0.094, SRMR = 0.057, CFI = 0.938, and NNFI (TLI) = 0.928). These values show that the four-factor model has construct validity.

Conclusions: The newly developed Medical Artificial Intelligence Readiness Scale for Medical Students (MAIRS-MS) was found to be a valid and reliable tool for evaluating and monitoring medical students' perceived readiness for AI technologies and applications. Medical schools may use MAIRS-MS to bring a physician-training perspective compatible with AI in medicine into their curricula. Medical and health science education institutions could also benefit from the scale as a curriculum development tool, both for learner needs assessment and for measuring participants' end-of-course perceived readiness.
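As a minimal sketch of the reliability statistic reported above, the following Python snippet computes Cronbach's alpha from a matrix of item responses. The function name, variable names, and simulated data are illustrative assumptions, not the authors' code or data; only the alpha formula itself comes from standard psychometrics.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]                         # number of items on the scale
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative only: simulated 5-point Likert responses for 22 items driven
# by a common latent trait, not the actual MAIRS-MS data (n = 568, 22 items).
rng = np.random.default_rng(0)
latent = rng.normal(size=(568, 1))
responses = np.clip(np.round(3 + latent + rng.normal(scale=0.8, size=(568, 22))), 1, 5)
print(f"alpha = {cronbach_alpha(responses):.2f}")
```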
Background: Artificial intelligence (AI) has affected our day-to-day lives to a great extent. Healthcare is one of the mainstream fields affected, with noticeable changes in treatment and education. Medical students must understand how AI technologies mediate and frame their decisions on medical issues. Formal instruction on AI concepts can help learners relate AI outcomes to their own perceptions and reasoning in the dynamic and ambiguous reality of daily medical practice. The purpose of this study is to reach and report a consensus on the competencies medical graduates require to be ready for AI technologies and their possible applications in medicine.

Materials and methods: A three-round e-Delphi survey was conducted between February 2020 and November 2020. The Delphi panel incorporated experts from different backgrounds: (i) healthcare professionals/academics; (ii) computer and data science professionals/academics; (iii) law and ethics professionals/academics; and (iv) medical students. Round 1 of the Delphi survey began with exploratory open-ended questions. Responses received in the first round were evaluated and refined into a 27-item questionnaire, which was then sent to the experts to be rated on a 7-point Likert-type scale (1: Strongly Disagree to 7: Strongly Agree). As in the second round, the participants repeated their ratings in the third round with reference to the second-round analysis. The agreement level and strength of consensus were determined from the third-round results: the median score was used to determine the agreement level, and the interquartile range (IQR) was used to determine the strength of the consensus.

Results: Among 128 invitees, a total of 94 agreed to become members of the expert panel. Of them, 75 (79.8%) completed the Round 1 questionnaire, 69/75 (92.0%) completed Round 2, and 60/69 (87.0%) responded to Round 3. There was strong agreement on 23 items and weak agreement on 4 items.

Conclusions: This study provides a consensus list of the competencies medical graduates require to be ready for the implications of AI, which could bring new perspectives to medical education curricula. The unique feature of the current research is its guiding role in integrating AI into curriculum processes, syllabus content, and the training of medical students.
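A minimal sketch of the consensus calculation described above: for each item, the median of the 7-point ratings gives the agreement level and the IQR gives the strength of consensus. The IQR cutoff of 1.0 used here to separate strong from weak consensus is an illustrative assumption; the abstract does not state the threshold the authors applied.

```python
import numpy as np

def delphi_consensus(ratings: np.ndarray, iqr_cutoff: float = 1.0):
    """Summarize one Delphi item from its 7-point Likert ratings.

    Returns the median (agreement level), the IQR, and a strength label.
    The iqr_cutoff of 1.0 is an assumption for illustration.
    """
    median = np.median(ratings)
    q1, q3 = np.percentile(ratings, [25, 75])
    iqr = q3 - q1
    strength = "strong" if iqr <= iqr_cutoff else "weak"
    return median, iqr, strength

# Illustrative ratings from a hypothetical 60-member Round 3 panel.
rng = np.random.default_rng(1)
item_ratings = rng.integers(5, 8, size=60)  # mostly 5-7: high agreement
print(delphi_consensus(item_ratings))
```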
Background and purpose: Due to the COVID-19 pandemic, scientific congresses are increasingly being organized as virtual congresses (VCs). In May 2020, the European Academy of Neurology (EAN) held a VC, free of charge. In the absence of systematic studies on this topic, the aim of this study is to evaluate the attendance and perceived quality of the 2020 EAN VC compared with the 2019 EAN face-to-face congress (FFC).

Methods: The demographic data of participants obtained from online registration were analysed. The two congresses were compared using a survey with questions on the perception of speakers' performance, the quality of networking, and other aspects.

Results: Of 43,596 registered participants, 20,694 active participants attended the VC. Compared with 2019, the number of participants tripled (6,916 in 2019), and the cumulative number of participants attending the sessions was five times higher (169,334 in 2020 vs. 33,024 in 2019). Of the active participants, 55% were from outside Europe, 42% were board-certified neurologists (FFC 80%), and 21% were students (FFC 0.6%). The content of the congress was evaluated as 'above expectation' by 56% of the attendees (FFC 41%).
Instructive case-based exams and the subsequent case discussions appeared to be a high-potential, motivating teaching tool in the clinical problem-solving domain for 6th-year students.