The aims of this research were to (1) develop and validate a computerized dynamic assessment for reading literacy and (2) study the effects of different types of prompting in the computerized dynamic assessment on reading literacy, growth of learning, and learning potential. The pilot sample comprised 802 ninth-grade students from six secondary schools in Bangkok; data for the main study were gathered from 541 ninth-grade students in eleven secondary schools in Bangkok. A quasi-experimental design was adopted. The research instruments were reading literacy tests for computerized dynamic assessment. The data were analyzed as follows: 1) ANCOVA for reading literacy performance, 2) latent growth modeling for growth of learning, and 3) mixed ANOVA and MANOVA for learning potential. The findings were as follows:

1. This research developed a computerized dynamic assessment for reading literacy, delivered as an interactive online feedback program. The tested content comprised three dimensions of reading literacy: 1) locate information, 2) understand, and 3) evaluate and reflect. Three parallel tests were administered at three different time points, each comprising twenty two-tier items. Regarding the psychometric properties of the instrument, most items in all reading literacy test forms were appropriate in terms of content validity and parallelism. In the model comparison, the bifactor MIRT model was the best-fitting model. A majority of the items had good multidimensional discrimination values, and the multidimensional difficulty estimates were within the acceptable range for most items. Moreover, the instruments yielded high internal consistency reliability. Regarding the statistical parallelism of the test forms, they showed satisfactory conformity at both the test-form and item-by-item levels.

2.
For reading literacy performance, different types of prompting in the computerized dynamic assessment had a significant effect on students' reading posttest scores: the verification prompting group received significantly lower posttest scores than the other prompting-based groups.

For growth of learning, the mixed prompting group obtained the highest growth rate in reading literacy among the groups. Regarding the associations among the reading literacy subscales, the growth rate in one subscale was not related to the growth rates in the other subscales, except in the mixed prompting group, where the growth rate in understand was associated with the growth rate in evaluate and reflect.

In terms of learning potential, the availability scores measured at the third testing session differed significantly among the groups with different types of prompting: the verification prompting group had a significantly higher availability score than the mixed prompting group. For the mediated score, there was no significant two-way interaction between prompting condition and time; however, there was a significant main effect of prompting condition, with the verification prompting group obtaining a significantly lower mediated score than the other groups. For the levels of prompting, there were significant differences in the first, second, third, and fourth levels of prompting among the groups with different types of prompting: the verification prompting group received significantly more assistance than the mixed prompting group at all levels of prompting.
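The two-tier item format used in the reading literacy tests (finding 1) pairs a content question with a follow-up reasoning question. A minimal scoring sketch is shown below, assuming the common rule that an item is credited only when both tiers are answered correctly; the abstract does not specify the study's actual scoring rule, and the class and function names here are illustrative only:

```python
# Sketch of two-tier item scoring. ASSUMPTION: an item counts as correct
# only when both the content tier and the reasoning tier are correct;
# this rule is common in two-tier testing but is not stated in the abstract.
from dataclasses import dataclass

@dataclass
class TwoTierItem:
    key_tier1: str  # correct option for the content question (tier 1)
    key_tier2: str  # correct option for the reasoning question (tier 2)

def score_item(item: TwoTierItem, answer_tier1: str, answer_tier2: str) -> int:
    """Return 1 only if both tiers are answered correctly, else 0."""
    return int(answer_tier1 == item.key_tier1 and answer_tier2 == item.key_tier2)

def score_test(items: list[TwoTierItem],
               responses: list[tuple[str, str]]) -> int:
    """Total score over a two-tier test (0 to len(items))."""
    return sum(score_item(item, a1, a2)
               for item, (a1, a2) in zip(items, responses))
```

Under this rule, a student who picks the right answer for the wrong reason receives no credit, which is the diagnostic point of the two-tier format.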
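The ANCOVA step for reading literacy performance can be sketched as a model comparison: a regression of posttest scores on the pretest covariate alone versus one that adds dummy-coded prompting-group membership, with the F statistic testing the group effect after adjusting for pretest. All data below are synthetic; the group labels and effect sizes are hypothetical illustrations, not the study's dataset or results:

```python
# Hedged ANCOVA sketch: posttest ~ pretest + group, tested via nested-model
# F comparison. Synthetic data only; "elaborated" is a hypothetical third
# prompting condition, not named in the abstract.
import numpy as np

rng = np.random.default_rng(0)
groups = ["verification", "mixed", "elaborated"]  # hypothetical conditions
group_effect = {"verification": 0.0, "mixed": 5.0, "elaborated": 3.0}
n_per_group = 40

pretest, posttest, labels = [], [], []
for g in groups:
    x = rng.normal(50, 10, n_per_group)                      # pretest scores
    y = 10 + 0.8 * x + group_effect[g] + rng.normal(0, 3, n_per_group)
    pretest.extend(x); posttest.extend(y); labels.extend([g] * n_per_group)
pretest, posttest = np.array(pretest), np.array(posttest)
n = len(posttest)

# Reduced model: intercept + pretest covariate.
X_red = np.column_stack([np.ones(n), pretest])
# Full model adds dummy-coded group membership (reference: verification).
dummies = np.column_stack([[1.0 if lab == g else 0.0 for lab in labels]
                           for g in groups[1:]])
X_full = np.column_stack([X_red, dummies])

def rss(X, y):
    """Residual sum of squares of an OLS fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

rss_red, rss_full = rss(X_red, posttest), rss(X_full, posttest)
df_num = X_full.shape[1] - X_red.shape[1]   # extra parameters for group
df_den = n - X_full.shape[1]
F = ((rss_red - rss_full) / df_num) / (rss_full / df_den)
print(f"ANCOVA F({df_num}, {df_den}) = {F:.2f}")
```

A large F here indicates that the prompting groups differ on adjusted posttest means, which is the form of evidence summarized in the performance findings above.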