This paper examines how predictive algorithms construct risk by calculating and anticipating children's uncertain futures. Theoretically, we analyze algorithmic risk construction by attending to (a) the problematizations justifying algorithmic prediction, (b) the data infrastructures underpinning it, and (c) the configurations of agency across humans and machines. Empirically, we examine two experiments in Danish child protection services that developed algorithmic models to predict child maltreatment. Our analysis highlights how algorithmic predictions can create different notions of risk. The first case used predictive algorithms to supplement human risk assessments with data from child protection services, while the second aimed to detect risk early by constructing parents as risk factors, which required data from other welfare sectors. Comparing these cases, we identify two distinct risk constructions: one that uses algorithmic prediction to manage uncertainty, and another that seeks to eliminate undesired futures by preempting risk. These constructions have implications for how the present is framed as a moment of intervention and for how families are constituted as "risk objects."