The risks AI presents to society are broadly understood to be manageable through ‘general calculus’, i.e., general frameworks designed to enable those involved in the development of AI to apprehend and manage risk, such as AI impact assessments, ethical frameworks, emerging international standards, and regulations. This paper elaborates how risk is apprehended and managed by a regulator, a developer, and a cyber-security expert. It reveals that risk and risk management are dependent on mundane situated practices that are not encapsulated in general calculus. Situated practice surfaces ‘iterable epistopics’, which reveal how those involved in the development of AI come to know and subsequently respond to risk, and which uncover major challenges in their work. The ongoing discovery and elaboration of epistopics of risk in AI (a) furnishes a potential program of interdisciplinary inquiry, (b) provides AI developers with a means of apprehending risk, and (c) informs the continued evolution of general calculus.