The original version of the Readiness for Interprofessional Learning Scale (RIPLS) was published by Parsell and Bligh (1999). Three sub-scales with acceptable or high internal consistencies were proposed; however, two subsequent publications suggested different sub-scales. This prompted an investigation into how to improve the reliability of the RIPLS instrument for use with undergraduate health-care students. Content analysis of the original 19 items, involving experienced health-care staff, resulted in four sub-scales. These sub-scales were then used to formulate a candidate model within a structural equation modelling framework. Goodness of fit was assessed using a sample (n = 308) of new first-year undergraduate students from eight different health and social care programmes. The same data were fitted to each of the two original sub-scale models suggested by Parsell and Bligh (1999) and the results compared. The fit of the new four sub-scale model appeared superior to either of the original models. The new four-factor model was then tested on subsequent data (n = 247) obtained from the same students at the end of their first year, where the fit was even better.
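As a rough illustration of this kind of model comparison, the sketch below fits a four-factor measurement model to the 19 RIPLS items with the semopy package and prints its fit indices; the file name, the generic factor labels (F1–F4), and the item-to-factor assignments are placeholders for illustration, not the sub-scales reported in the study.

```python
# Sketch: fitting a candidate four-factor model for the 19 RIPLS items and
# inspecting goodness-of-fit statistics. Factor assignments are illustrative only.
import pandas as pd
from semopy import Model, calc_stats  # pip install semopy

data = pd.read_csv("ripls_responses.csv")  # hypothetical file: columns item1..item19

# lavaan-style measurement model; swap in the sub-scales under comparison.
four_factor = """
F1 =~ item1 + item2 + item3 + item4 + item5 + item6
F2 =~ item7 + item8 + item9 + item10
F3 =~ item11 + item12 + item13 + item14 + item15
F4 =~ item16 + item17 + item18 + item19
"""

model = Model(four_factor)
model.fit(data)
print(calc_stats(model).T)  # chi-square, CFI, TLI, RMSEA, AIC, ... for comparing models
```

Fitting each competing sub-scale specification to the same data and comparing these indices is one way to judge which model fits best.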
Background: Effective clinical teaching is crucially important for the future of patient care, and robust clinical training is essential to produce physicians capable of delivering high-quality health care. Tools used to evaluate the teaching qualities of medical faculty should be reliable and valid. This study investigates the psychometric properties of a modified System for Evaluation of Teaching Qualities (SETQ) instrument in the clinical years of undergraduate medical education.
Methods: This cross-sectional multicentre study was conducted in four teaching hospitals in the Kingdom of Bahrain. Two hundred ninety-eight medical students were invited to evaluate 105 clinical teachers using the SETQ instrument between January 2015 and March 2015. Feasibility of the questionnaire was analysed using the average time required to complete the form and the number of raters required to produce reliable results. Instrument reliability (internal consistency) was assessed by calculating Cronbach's alpha for the total scale and for each sub-scale (factor). To provide evidence of construct validity, an exploratory factor analysis was conducted to identify which survey items belonged together; these were then grouped as factors.
Results: One hundred twenty-five medical students completed 1161 evaluations of 105 clinical teachers. The response rates were 42% for student evaluations and 57% for clinical teacher self-evaluations. The factor analysis showed that the questionnaire was composed of six factors, explaining 76.7% of the total variance. Cronbach's alpha was 0.94 or higher for the six factors in the student survey; for the clinical teacher survey, Cronbach's alpha was 0.88. In both instruments, the item-total correlation was above 0.40 for all items within their respective scales.
Conclusion: Our modified SETQ questionnaire was found to be both reliable and valid, and was implemented successfully across various departments and specialties in different hospitals in the Kingdom of Bahrain.
Electronic supplementary material: The online version of this article (doi:10.1186/s12909-017-0893-4) contains supplementary material, which is available to authorized users.
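To make the two analyses above concrete, here is a minimal sketch that computes Cronbach's alpha directly from its definition and runs an exploratory factor analysis with the factor_analyzer package; the file name, column layout, and the six-factor choice (mirroring the reported solution) are assumptions for illustration.

```python
# Sketch: internal consistency and exploratory factor analysis for a
# SETQ-style item matrix (one column per item, one row per evaluation).
import pandas as pd
from factor_analyzer import FactorAnalyzer  # pip install factor_analyzer

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

ratings = pd.read_csv("setq_student_ratings.csv")  # hypothetical file
print("alpha (full scale):", round(cronbach_alpha(ratings), 2))

# Exploratory factor analysis; six factors as in the reported solution.
efa = FactorAnalyzer(n_factors=6, rotation="varimax")
efa.fit(ratings)
print(efa.loadings_)               # item-to-factor loadings
print(efa.get_factor_variance())   # variance, proportion, and cumulative variance explained
```

The same alpha function can be applied to each sub-scale's columns separately to reproduce per-factor reliabilities.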
Background: The purpose of this study was to find a reliable method for choosing graduates for a higher-education award. One such method that has achieved notable popularity is multisource feedback, an assessment tool that draws on evaluations from different groups, including both physicians and non-physicians. It is useful for assessing several domains, including professionalism, communication and collaboration, and is therefore a valuable tool for providing a well-rounded selection of the top interns for postsecondary awards. Sixteen graduates of the Royal College of Surgeons in Ireland-Medical University of Bahrain (RCSI Bahrain) responded to an invitation to participate in the student award, which was conducted using the multisource feedback process. Each participant was rated by five individuals from different categories (physicians, nurses, and fellow students), although a total of 15 raters had originally been proposed. Ratings were summarised using means and standard deviations, and the award went to the participant with the top score among the 16 participants. Reliability and internal consistency were calculated using Cronbach's coefficient, and construct validity was evaluated using factor analysis.
Results: Sixteen graduates participated in the RCSI Bahrain interns' award based on the multisource feedback process, giving a 16.5% response rate. The instrument was found to be suitable for factor analysis and showed a three-factor solution representing 79.3% of the total variance. Reliability analysis using Cronbach's alpha indicated that the full scale of the instrument had high internal consistency (Cronbach's alpha 0.98).
Conclusion: This study confirmed our hypothesis, finding multisource feedback to be a reliable and valid process for choosing the most suitable graduates for interns' awards. Unfortunately, the response rate was low, which may mean that multisource feedback is not a realistic way to bring most students into the process.
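A minimal sketch of how such multisource ratings might be aggregated to rank participants for the award, assuming a long-format table with hypothetical column names (graduate_id, rater_category, score); this is not the study's actual data layout.

```python
# Sketch: aggregating multisource-feedback ratings per graduate and ranking
# them by mean score. File and column names are hypothetical.
import pandas as pd

ratings = pd.read_csv("msf_ratings.csv")  # long format: graduate_id, rater_category, score

summary = (
    ratings.groupby("graduate_id")["score"]
           .agg(mean_score="mean", sd="std", n_raters="count")
           .sort_values("mean_score", ascending=False)
)
print(summary.head())  # the award goes to the graduate with the highest mean score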
This study assessed the satisfaction levels of graduates of the Royal College of Surgeons in Ireland-Medical University of Bahrain (RCSI Bahrain). The graduate survey was administered to four cohorts of RCSI Bahrain graduates who completed their studies between 2010 and 2014; it assessed five major domains and comprised 41 items. RCSI Bahrain opened its doors in 2004, with the first class graduating in 2010, and the graduate cohorts used in this study were working in various countries at the time of survey completion. Of 599 graduates, 153 responded to the survey, an overall response rate of 26%, comprising 102 females, 44 males, and 7 respondents who did not indicate their gender. Forty-nine respondents graduated in 2012 and 53 in 2013. Of these graduates, 83 were working in Bahrain at the time of survey administration, 11 in the USA, 4 in Malta, and 3 in the UK; in total, graduates were working in 14 countries. Reliability analysis found high internal consistency for the instrument (Cronbach's alpha 0.97). The whole instrument was found to be suitable for factor analysis (KMO = 0.853; Bartlett's test significant). Factor analysis showed that the questionnaire data loaded onto five factors, which accounted for 72.3% of the total variance: future performance, career development, skills development, graduate as collaborator, and communication skills. The survey results indicated that RCSI Bahrain graduates who responded are generally satisfied with their experience at the university, feel well prepared to join the field, and feel ready to compete with graduates of other universities. Furthermore, the graduate survey was found to be a reliable instrument, and the study provides some evidence to support its construct validity.
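As an illustration of the factorability checks reported above (the KMO measure and Bartlett's test of sphericity), here is a minimal sketch using the factor_analyzer package; the file name and column layout are assumptions for illustration.

```python
# Sketch: KMO and Bartlett's test of sphericity for a 41-item survey matrix
# before running factor analysis. File and columns are hypothetical.
import pandas as pd
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

survey = pd.read_csv("graduate_survey_items.csv")  # one column per item, one row per respondent

chi_square, p_value = calculate_bartlett_sphericity(survey)
kmo_per_item, kmo_overall = calculate_kmo(survey)
print(f"Bartlett chi2 = {chi_square:.1f}, p = {p_value:.4g}")
print(f"Overall KMO = {kmo_overall:.3f}")  # values above ~0.8 indicate good factorability
```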