Curriculum-based measurement of reading (CBM-R) is a common assessment that educators use to monitor student growth in broad reading skills and to evaluate the effectiveness of instructional programs. Computer-adaptive tests (CATs), such as Star Reading, have been cited as a viable option for formatively assessing reading growth. We used Bayesian multivariate multilevel modeling to compare growth measured via concurrently collected CBM-R and CAT data for 3,192 students across Grades 1 (n = 298), 2 (n = 1,149), 3 (n = 1,062), 4 (n = 462), and 5 (n = 221). After standardizing outcomes, the average rate of weekly improvement on each measure, as well as the between-student variability in growth, was highly similar across grade levels. The magnitude of residual variance, or error, however, differed markedly between assessments: in Grades 1 to 3, Star Reading yielded less precise estimates of growth than CBM-R, whereas the opposite pattern was observed in Grade 5.
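The link between residual variance and the precision of growth estimates can be illustrated with a toy simulation. This is not the authors' model or data: it is a minimal stand-in that fits an ordinary least-squares slope to each simulated student's weekly scores, and every numeric value (mean weekly growth, residual SDs, number of weeks) is hypothetical, chosen only to show that larger measurement error produces noisier individual growth estimates.

```python
import random
import statistics


def ols_slope(xs, ys):
    """Closed-form OLS slope of ys regressed on xs."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    return sxy / sxx


def slope_error_sd(n_students, weeks, sigma_resid, rng):
    """SD of (estimated slope - true slope) across simulated students.

    Each student gets a true weekly growth rate drawn from a normal
    distribution (hypothetical values on a standardized score scale),
    plus residual noise with SD sigma_resid on every weekly score.
    """
    errors = []
    xs = list(range(weeks))
    for _ in range(n_students):
        true_slope = rng.gauss(0.02, 0.01)   # hypothetical growth rate
        intercept = rng.gauss(0.0, 1.0)      # hypothetical starting level
        ys = [intercept + true_slope * w + rng.gauss(0.0, sigma_resid)
              for w in xs]
        errors.append(ols_slope(xs, ys) - true_slope)
    return statistics.stdev(errors)


rng = random.Random(42)
err_low = slope_error_sd(500, 30, 0.3, rng)   # lower residual SD
err_high = slope_error_sd(500, 30, 0.6, rng)  # higher residual SD
print(err_low, err_high)  # the higher-noise condition yields larger error
```

Doubling the residual SD roughly doubles the spread of slope-estimation errors, consistent with the analytic standard error of an OLS slope, sigma divided by the square root of the sum of squared centered time points; this is the mechanism behind one assessment yielding less precise growth estimates than the other at the same true rate of growth.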