Over the millennia, people have developed normative standards, legal frameworks, personal capabilities and moral theories for assigning responsibility within a complex interacting web of humans and the groups they form. Where responsibilities lie is not always straightforward, partly because responsibility may encompass different concepts. Nicole Vincent (2011) distinguishes six responsibility concepts in her taxonomy. First, virtue responsibility, where calling someone responsible says something good about their character, as exemplified in a reputation for doing what is seen to be the right thing. Second, role responsibility, which can be seen as someone's obligation, given the social or institutional role they have taken on or been assigned. Third, outcome responsibility, where being responsible implies that someone is blameworthy for their actions and/or the outcomes of those actions. Fourth, causal responsibility, whereby to be responsible is to cause, or to create the conditions for, various outcomes. Fifth, capacity responsibility, which refers to an individual's mental (cognitive and volitional) capacities, which determine their moral agency and the extent to which they can be held responsible for their actions. Finally, liability responsibility refers to the act of holding someone responsible for what happened. Holding someone responsible 'refers to the things that someone must do, or how they should be treated, to set things right' (Vincent, 2011, p. 18).

Increasingly rapid developments in machine learning (ML) have focused public attention on the current and future impact of artificial intelligence (AI). Technological advances have led to the emergence of new autonomous AI agents which, once developed and deployed, will behave in ways that cannot be predicted by their developers and users (Russell, 2019). Unlike earlier expert systems, modern statistical AI is based on representational learning methods (LeCun et al., 2015). These methods allow developers to 'feed' deep-learning algorithms unstructured data from which they learn, resulting in the emergence of behaviours that often exceed human ability. Representational learning is implemented as a black box: one can view its input (data) and output (behaviour), but not its internal workings (Castelvecchi, 2016). AI agents can therefore behave in ways that are autonomous and unpredictable. This introduces a novel societal issue, the responsibility gap, in which the designer and user of an AI are not fully capable of predicting the AI's behaviour (Matthias, 2004). Although AI may have causal efficacy, it is not clear who should be held responsible.
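To make the black-box point concrete, the minimal sketch below (an illustration of our own, not drawn from the cited sources) trains a tiny neural network in Python with NumPy to compute XOR, a rule an earlier expert system would have encoded explicitly. The network's inputs and outputs are fully inspectable, yet its internal workings are only arrays of learned numbers that correspond to no human-readable rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the kind of rule a classical expert system would state explicitly.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# A minimal two-layer network: 2 inputs -> 4 hidden units -> 1 output.
W1 = rng.normal(size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass: the observable mapping from input to output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradient of the squared error, used to adjust weights.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

# The input-output behaviour is transparent: predictions approach [0, 1, 1, 0].
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))

# The 'internal workings' are opaque: nothing in these numbers reads as
# the rule 'output 1 when exactly one input is 1'.
print(W1)
```

The same opacity, trivial here, is what Castelvecchi (2016) describes at scale: in a deep network the behaviour emerges from millions of such parameters, and neither the developer nor the user can read off from them what the system will do.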