Current approaches to risk management place insufficient emphasis on the system knowledge available to the assessor, particularly with respect to the dynamic behavior of the system under threat, the role of human agents (HAs), and the knowledge available to those agents. In this article, we address the second of these issues. We are concerned with a class of systems containing HAs who play a variety of roles as significant system elements (as decision-makers, cognitive agents, or implementers), that is, human activity systems (HASs). Within this family of HASs, we focus on safety- and mission-critical systems, referring to this subclass as critical human activity systems (CHASs). Identifying the role and contribution of these human elements to a system is a nontrivial problem, whether in an engineering context or, as is the case here, in a wider social and public context. Frequently, they are treated as standing apart from the system in design or policy terms. Regardless of the policy-definition process followed, analysis of the risks and threats to such a CHAS requires a holistic approach, since the effect of undesirable, uninformed, or erroneous actions on the part of the human elements is both potentially significant to the system output and inextricably bound up with the nonhuman elements of the system. We present a procedure for identifying the potential threats and risks emerging from the roles and activity of those HAs, using the 2014 flooding in southwestern England and the Thames Valley as a contemporary example.