The paper explores the accuracy of the feedback provided to non-native learners of English by the pronunciation module included in Microsoft Reading Progress. We compared the pronunciation assessment offered by Reading Progress with that of two university pronunciation teachers. Recordings from students of English who aim for native-like pronunciation were assessed independently by Reading Progress and the human raters. The output was standardized as negative binary feedback assigned to orthographic words, matching the Microsoft format. Our results indicate that Reading Progress is not yet ready for use as a computer-assisted pronunciation training (CAPT) tool. Inter-rater reliability analysis showed a moderate level of agreement across all raters and a good level of agreement once feedback from Reading Progress was excluded. The qualitative analysis additionally revealed several problems, notably false positives, i.e., words pronounced within the boundaries of academic pronunciation standards but still marked as incorrect by the digital rater. We recommend that EFL teachers and researchers approach the current version of Reading Progress with caution, especially with regard to its automated feedback; its design may, however, still be useful for delivering manual feedback. Given Microsoft's declarations that Reading Progress will be developed to include more accents, it has the potential to evolve into a fully functional CAPT tool for EFL pedagogy and research.
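The abstract does not name the agreement statistic used in the inter-rater reliability analysis, so the following is only an illustrative sketch: it assumes Fleiss' kappa over binary word-level judgements, uses invented ratings for three raters (two teachers plus Reading Progress), and computes the statistic with and without the digital rater, mirroring the comparison described above.

```python
# Illustrative sketch (not the authors' code): inter-rater agreement on
# negative binary feedback assigned to orthographic words, assuming
# Fleiss' kappa as the statistic. The ratings below are invented.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Rows = orthographic words; columns = raters
# (teacher 1, teacher 2, Reading Progress); 1 = flagged as mispronounced.
ratings = np.array([
    [0, 0, 1],  # a "false positive": both humans accept, the tool flags
    [1, 1, 1],
    [0, 0, 0],
    [1, 0, 1],
    [0, 0, 1],
])

# Agreement across all three raters.
counts_all, _ = aggregate_raters(ratings)
print("kappa, all raters:", fleiss_kappa(counts_all))

# Agreement with Reading Progress feedback eliminated (human raters only).
counts_humans, _ = aggregate_raters(ratings[:, :2])
print("kappa, humans only:", fleiss_kappa(counts_humans))
```

With these invented ratings, kappa rises once the digital rater is dropped, which is the pattern the abstract reports; the actual values and interpretation bands ("moderate", "good") depend on the statistic and data the study itself used.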