This study investigates the validity and instructional value of a rubric developed to evaluate the quality of online courses offered at a midsized public university. The rubric is adapted from an online course quality rubric widely used in higher education, the Quality Matters rubric. We first examine the reliability and preliminary construct validity of the rubric using quality ratings for 202 online courses and eliminate 12 problematic items. We then examine the instructional value of the rubric by investigating causal relationships between: (a) course quality scores, (b) online interactions between students, instructors, and content, and (c) student course performance (course passing rates). A path analysis model, using data from 121 online courses enrolling 5,240 students, shows that only rubric items related to learner engagement and interaction have a significant and positive effect on online interactions, while only student-content interaction significantly and positively influences course passing rates.

Keywords: online course quality, rubric, online interactions, rubric reliability, rubric validity, Quality Matters rubric

Lee, J. E., Recker, M., & Yuan, M. (2020). The validity and instructional value of a rubric for evaluating online course quality: An empirical study. Online Learning, 24(1), 245-263.

The number of college students taking online courses has increased dramatically over the past decade, with almost 31% of U.S. undergraduate students (about 5.2 million) having taken at least one course online as of the 2016 fall semester (McFarland et al., 2018). With this rapid growth in the number of online courses, evaluating their quality has taken on a new urgency. While many approaches have been developed to evaluate online course quality (for example, surveys, checklists, observations, peer reviews, and expert reviews), one common way is through quality rubrics.