In recent decades, the assessment of instructional quality has grown into a popular and well-funded arm of educational research. The present study contributes to this field by exploring the first impressions of untrained raters as an innovative approach to assessment. We apply the thin-slice procedure to obtain ratings of instructional quality along the dimensions of cognitive activation, classroom management, and constructive support, based on only 30 s of classroom observation. The ratings were compared with longitudinal data from the students taught in the videos to investigate the connection between these brief glimpses of instructional quality and student learning. In addition, we included samples of raters with different backgrounds (university students, middle school students, and educational research experts) to examine how thin-slice ratings differ in their predictive power for student learning. Results suggest that each group provides reliable ratings, as measured by a high degree of inter-rater agreement, as well as ratings that are predictive of students' learning. Furthermore, we find that experts' and middle school students' ratings of classroom management and constructive support, respectively, explain unique components of variance in student test scores. This incremental validity can be explained by the implicit knowledge of experts and by an attunement to specific cues attributable to the emotional involvement of students.