Surprisingly, the students' self-reports indicated that many of them performed procedures on patients infrequently. As expected, for most procedures there was a significant association between frequency of performance and self-assessed competency.
This study evaluated the effectiveness of the Medical College of Georgia School of Medicine's Summer Prematriculation Program in facilitating participants' first-year achievement and retention from 1980 to 1989. The four-week curriculum included biochemistry, anatomy, immunology, learning skills, medical or anatomy terminology, and clinical forums. Of 101 black and 96 other nontraditional, at-risk students who had been invited to attend the program, the 115 attendees were compared with the 82 nonattendees on a variety of demographic and academic measures. For all measures examined, no statistically significant difference was found between the two groups. Several factors may have obscured the results. However, there were indications that the program had a favorable effect, including the evaluation by over 90% of the attendees that the program had contributed positively to their adjustment to medical school.
A questionnaire on the interview process used in selecting first-year medical students was sent to all 123 U.S. medical schools. The questionnaire included items on the role of the interview in selection, the nature of the interview process, and how the interview was administered. The results revealed that 106 of the 107 responding schools used an interview in selecting students. Among four selection factors, the interview ranked second in importance to grade-point average; references and Medical College Admission Test (MCAT) scores were of lesser importance. Only 8 percent of the institutions used a selection index for admission processing that assigned the interview a specific percentage weight. The interviews generally used a one-to-one format, and most applicants had at least two separate interviews. All of the schools used faculty and staff members as interviewers, and 72 percent of the schools also used students. Eighty percent of the schools made certain that each interviewee was interviewed by at least one member of the admissions committee, and all interviewers were admissions committee members at more than half of the schools. Items on interview organization revealed that most of the schools trained or briefed their interviewers and required a written interview report for the applicant's file. About three-fourths of the schools held interviews only on their campus. About one-third of the schools did follow-up studies on the effectiveness of interviews in predicting success. Most schools had a geographical preference area for applicants, and on average the schools interviewed 69 percent of the applicants from that area.
To identify factors that influence students to choose primary care or non-primary care specialties, the authors surveyed the 509 graduating students at the Medical College of Georgia School of Medicine in 1988, 1989, and 1990. Using a Likert-type scale, the 404 responding students rated potential influences on their specialty choices from 1 (very negative) to 7 (very positive). The students choosing primary care specialties were positively influenced significantly more often by their desire to keep options open (85% versus 58%, p < .001) and their desire for longitudinal patient care opportunities (95% versus 54%, p < .001). Those choosing non-primary care specialties were more often influenced by their desire for monetary rewards (69% versus 35%, p < .001) and by their perceptions of lifestyle following residency (74% versus 60%, p < .01) and prestige of the specialty (57% versus 36%, p < .001). The authors used multiple discriminant analysis to derive a discriminant function that would permit classification of students into primary care and non-primary care groups. The potential influences of desire for longitudinal care opportunities and desire for monetary rewards were statistically and clinically significant for all three years. Using the discriminant function, the authors correctly classified 81%, 79%, and 78% of the students' specialty choices for 1988, 1989, and 1990, respectively. The authors suggest that addressing the issue of monetary rewards will be necessary before the primary care fields again become attractive to students.
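The classification step described in this abstract, fitting a discriminant function to influence ratings and checking how many students it assigns to the correct group, can be illustrated with a minimal sketch using scikit-learn's LinearDiscriminantAnalysis. The ratings, labels, and accuracy below are invented placeholders, not the study's survey data or the authors' code.

```python
# Hypothetical sketch of discriminant classification of specialty choice
# from Likert-type influence ratings; all values are invented placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Columns: desire for longitudinal care, desire for monetary rewards (1-7 scale)
ratings = np.array([
    [7, 2], [6, 3], [7, 4], [6, 2],   # hypothetical primary care respondents
    [3, 6], [2, 7], [4, 6], [3, 5],   # hypothetical non-primary care respondents
])
choice = np.array([1, 1, 1, 1, 0, 0, 0, 0])  # 1 = primary care, 0 = non-primary care

lda = LinearDiscriminantAnalysis()
lda.fit(ratings, choice)

# Proportion of respondents whose stated choice the fitted discriminant
# function reproduces (the quantity analogous to the 78-81% reported above).
print(f"correctly classified: {lda.score(ratings, choice):.0%}")
```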
This study addressed the questions of whether medical students' cumulative grade-point averages (GPAs) correlate with the performance assessments (overall and in specific areas of competency) that they receive as interns from their internship program directors, and whether the students' self-assessments of preparedness for internship correlate with their internship directors' overall assessments. A questionnaire to assess interns' competencies was developed and sent to the directors of the internship programs of the 283 1990 and 1991 graduates of the Medical College of Georgia School of Medicine who consented to participate in the study (82% of the graduates). Eighty percent of the program directors responded. A similar questionnaire was sent to all 342 of the 1990 and 1991 graduates; 38% provided self-assessments of their competencies and also stated their views on how well prepared they were for their internships. Considering the classes as a group, the mean ratings of the interns' overall competencies by the program directors ranged from 3.7 to 4.3 on a five-point Likert scale (1, unsatisfactory, to 5, outstanding), whereas the interns' ratings of how well they were prepared for their internships (that is, their sense of overall competency) were somewhat lower, ranging from 3.4 to 4.0. The correlations of GPAs with the specific areas of competencies ranged from .28 to .51. The correlation between the mean ratings of the program directors and the mean self-ratings of the interns was .58. The data support the conclusions that medical school academic performance relates significantly to performance in internship and that interns do not rate themselves as highly as their program directors do.
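The correlations reported in this abstract are ordinary bivariate correlations between graduates' GPAs and the ratings they received. A minimal sketch of that computation, using scipy and invented placeholder numbers rather than the study's data, is shown below.

```python
# Hypothetical sketch of the GPA-to-rating correlation; values are invented
# placeholders, not the graduates' actual GPAs or program directors' ratings.
from scipy.stats import pearsonr

gpa = [3.9, 3.4, 3.7, 3.1, 3.6, 2.9, 3.8, 3.3]              # cumulative GPAs
director_rating = [4.5, 3.8, 4.2, 3.5, 4.0, 3.2, 4.4, 3.6]  # 1-5 Likert ratings

r, p = pearsonr(gpa, director_rating)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```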