Summative assessments (SAs) and formative assessments (FAs) fulfil complementary functions in education. The former measure knowledge acquired at the end of an educational unit in a standardised, high-stakes setting. FAs, by contrast, assess student performance as part of daily classroom activities in order to tailor feedback and instruction. Computer-based FA (CBFA) systems have made it technically feasible to collect unprecedented amounts of longitudinal data objectively and with minimal disruption to students, under conditions that more closely resemble real-life behaviour. In this paper, we investigated whether, and how well, FA outcomes can predict SA outcomes in a large sample of children assessed at different time points during compulsory schooling. To this end, we estimated student abilities using Item Response Theory and systematically compared regression models trained to predict SA abilities from different subsets of features derived from FA abilities and auxiliary variables. A model that included mean abilities in different competence domains performed best, and its predictions accounted for a considerable amount of variance, although the proportion of variance explained remained below that achieved by models based on past SA measures. The FA features showed specificity in that the most predictive features generally corresponded to abilities from the same or a similar competence domain as the predicted SA ability. Even though CBFA systems implement objective data-collection procedures, we observed systematic biases in the predictions that would need to be taken into account when using the models for decision-making.
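The pipeline described above (IRT ability estimation followed by regression of SA abilities on FA-derived features) can be sketched in a minimal form. This is an illustrative toy example, not the paper's actual specification: it assumes a Rasch (1PL) model with known item difficulties, simulates correlated FA and SA abilities, and uses a single FA-ability feature rather than per-domain means and auxiliary variables.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Simulated setup (all quantities hypothetical) ---
n_students, n_items = 500, 40
theta_fa = rng.normal(size=n_students)                               # latent FA abilities
theta_sa = 0.8 * theta_fa + rng.normal(scale=0.5, size=n_students)   # correlated SA abilities
b = rng.normal(size=n_items)                                         # item difficulties (assumed known)

# Simulate binary FA responses under the Rasch model:
# P(correct) = logistic(theta - b)
p_correct = 1.0 / (1.0 + np.exp(-(theta_fa[:, None] - b[None, :])))
X = (rng.random((n_students, n_items)) < p_correct).astype(float)

def estimate_ability(responses, difficulties, n_iter=25):
    """MAP estimate of one student's ability under the Rasch model.

    A standard-normal prior on theta keeps the estimate finite even for
    all-correct or all-incorrect response patterns (Newton-Raphson updates).
    """
    theta = 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(theta - difficulties)))
        grad = np.sum(responses - p) - theta      # log-posterior gradient
        hess = -np.sum(p * (1.0 - p)) - 1.0       # log-posterior Hessian (always negative)
        theta -= grad / hess
    return theta

# Estimate FA abilities from the response data
theta_hat = np.array([estimate_ability(X[i], b) for i in range(n_students)])

# Regress SA ability on the estimated FA ability (ordinary least squares)
A = np.column_stack([np.ones(n_students), theta_hat])
coef, *_ = np.linalg.lstsq(A, theta_sa, rcond=None)
pred = A @ coef

# Proportion of SA-ability variance explained by the FA-based feature
r2 = 1.0 - np.sum((theta_sa - pred) ** 2) / np.sum((theta_sa - theta_sa.mean()) ** 2)
print(f"R^2 of FA-based prediction: {r2:.2f}")
```

In the paper's actual comparison, the feature sets are richer (mean abilities per competence domain plus auxiliary variables), and model performance is benchmarked against predictions from past SA measures; the sketch only conveys the overall estimate-then-regress structure.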