Numerous instructional design models have been proposed over the past several decades. Instead of focusing on the design process (means), this study investigated how learners perceived the quality of instruction they experienced (ends). An electronic survey instrument containing nine a priori scales was developed. Students responded from 89 different undergraduate and graduate courses at multiple institutions (n = 140). Data analysis indicated strong correlations among student self-reports of academic learning time, how much they learned, First Principles of Instruction, satisfaction with the course, perceived mastery of course objectives, and global course ratings. Most importantly, these scales measure principles with which instructional developers and teachers can evaluate their products and courses, regardless of the design processes used: provide authentic tasks for students to do; activate prior learning; demonstrate what is to be learned; provide repeated opportunities for students to successfully complete authentic tasks with coaching and feedback; and help students integrate what they have learned into their personal lives.
Recent research has touted the benefits of learner-centered instruction, problem-based learning, and a focus on complex learning. Instructors often struggle to put these goals into practice and to measure the effectiveness of these new teaching strategies in terms of mastery of course objectives. Enter the course evaluation, often a standardized tool that yields little practical information for an instructor but is nonetheless used to make high-stakes career decisions, such as tenure and monetary awards to faculty. The present researchers have developed a new instrument to measure teaching and learning quality (TALQ). In a study of 464 students in 12 courses, if students agreed that they experienced academic learning time (ALT) and that their instructors used First Principles of Instruction, then students were nearly 4 times more likely to achieve high levels of mastery of course objectives, according to independent instructor assessments. TALQ can measure improvements in the use of First Principles in teaching and course design. Feedback from this instrument can assist teachers who wish to implement the recommendation made by Kuh et al. (2006) that universities and colleges should focus their assessment efforts on factors that influence student success.
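The "nearly 4 times more likely" figure is the kind of odds ratio produced by a logistic regression of instructor-assessed mastery on student agreement. The sketch below is a minimal illustration of that calculation, assuming synthetic data and hypothetical variable names rather than the actual TALQ dataset, using statsmodels in Python.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic illustration only -- not the TALQ study data.
# Predictor: student agreed they experienced ALT and that the
# instructor used First Principles of Instruction (1) or not (0).
rng = np.random.default_rng(0)
n = 464
agreed = rng.integers(0, 2, size=n)

# Outcome: high mastery of course objectives, simulated with roughly
# 4-to-1 odds in favor of students who agreed.
p_high = np.where(agreed == 1, 0.57, 0.25)
high_mastery = rng.binomial(1, p_high)

# Logistic regression; exponentiating the slope gives the odds ratio.
X = sm.add_constant(agreed.astype(float))
model = sm.Logit(high_mastery, X).fit(disp=False)
print(f"Estimated odds ratio: {np.exp(model.params[1]):.2f}")
```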
Purpose: To assess the value of the Physician Assistant Education Association's (PAEA) End of Curriculum Exam and of formative and summative exams administered during the physician assistant (PA) program in predicting Physician Assistant National Certifying Exam (PANCE) scores. Methods: The predictive value of the End of Curriculum Exam, the Physician Assistant Clinical Knowledge Rating and Assessment Tool (PACKRAT I, PACKRAT II), a PANCE simulation (SUMM I), and an Objective Structured Clinical Examination was assessed using correlation and regression analysis of data for 27 PA students from one cohort. Results: The End of Curriculum Exam, PACKRAT I, PACKRAT II, and SUMM I are statistically significant predictors of PANCE score (p < 0.01). The combination of PACKRAT I and PACKRAT II was the best predictor of PANCE score and explained a large proportion of the variance (77.0%) in PANCE scores. Conclusion: PAEA's End of Curriculum Exam is one of the strongest predictors of PANCE score (r = 0.78). It offers programs an additional opportunity to provide PA students with another layer of academic advising and to guide their preparation for the PANCE.
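The reported r = 0.78 and 77.0% of variance explained correspond to a Pearson correlation and the R-squared of a multiple regression. The sketch below shows how such an analysis is commonly run in Python; it uses synthetic data and hypothetical column names, not the actual 27-student cohort.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic illustration only -- not the PA program cohort data.
rng = np.random.default_rng(1)
n = 27
packrat1 = rng.normal(150, 15, n)
packrat2 = packrat1 + rng.normal(10, 8, n)
eoc = 1200 + 4.0 * packrat1 + rng.normal(0, 60, n)
pance = 300 + 2.5 * packrat1 + 1.5 * packrat2 + rng.normal(0, 40, n)

df = pd.DataFrame({"PACKRAT_I": packrat1, "PACKRAT_II": packrat2,
                   "EOC": eoc, "PANCE": pance})

# Pearson correlation of each exam with PANCE (analogous to r = 0.78).
print(df.corr()["PANCE"])

# Regression of PANCE on PACKRAT I and II; R-squared is the proportion
# of variance explained (analogous to the reported 77.0%).
X = sm.add_constant(df[["PACKRAT_I", "PACKRAT_II"]])
ols = sm.OLS(df["PANCE"], X).fit()
print(f"R-squared: {ols.rsquared:.3f}")
```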