Policymakers and researchers consistently call for greater human accountability for AI technologies. We should be clear about two distinct features of accountability.

Across the AI ethics and global policy landscape, there is consensus that there should be human accountability for AI technologies1. These machines are used for high-stakes decision-making in complex domains (for example, healthcare, criminal justice and transport), where they can cause or occasion serious harm. Some use deep machine learning models, which can make their outputs difficult to understand or contest. At the same time, when the datasets on which these models are trained reflect bias against specific demographic groups, that bias becomes encoded and causes disparate impacts2–4. Meanwhile, an increasing number of machines that embody AI, and specifically machine learning, such as highly automated vehicles, can execute decision-making functions and take actions independently of direct, real-time human control, in unpredictable conditions that call for adaptive performance. This development can obscure human agency. In light of these problems, a heterogeneous group of researchers and organizations has called for stronger, more explicit regulation and guidelines to ensure accountability for AI and autonomous systems1,5–7.

But what do we mean by 'accountability', and do we all mean the same thing? Accountability comes in different forms and varieties across rich and overlapping strands of academic literature in the humanities, law and social sciences. Scholars in the AI ethics field have recently proposed systematic conceptualizations of accountability to address this complexity8–11. Several researchers in the field8,10 take explicit inspiration from Bovens's influential analysis of accountability as a social relation, in which he describes accountability as "a relationship between an actor and a forum, in which the actor has an obligation to explain and to justify his or her conduct, the forum can pose questions and pass judgement, and the actor may face consequences"12.

A welcome development within the AI ethics landscape would be greater conceptual clarity on the distinction between the 'explaining' and 'facing the consequences' features of accountability, as well as the relation between them. This matters ethically, legally and politically, because these two core features of accountability (giving an explanation, and facing the consequences) can come apart and pull in different directions. We highlight them because, as the quotation illustrates, they represent a central bifurcation of the concept of accountability12,13. In addition, their relation is particularly complex when it comes to AI technologies.