The use of AI systems in vital stages of critical decision-making can result in so-called "responsibility gaps": roughly, outcomes for which no human agent can aptly be attributed responsibility.3 To illustrate, consider the following case:

Decision-Procedure Designer. The government intends to create a new law. According to this law, people of modest means can apply for and receive monetary aid. Since incoming applications must be processed and decisions about eligibility must be made, the government faces a choice about which type of decision-maker to put in charge: (a) human decision-makers, or (b) an AI system capable of processing applications and making unilateral decisions.

Following the literature, this seems to be a paradigmatic example of how a responsibility gap might come to exist (Kiener, 2022; Danaher, 2016). Those who believe in the existence of responsibility gaps tend to motivate their belief by pointing to the autonomy and complexity of future (and

1 Throughout the paper, we shall talk about 'AI systems', but we take this to include things like simple rule-based systems, machine learning systems, deep learning systems, etc.

2 See Kraaijeveld (2020); Pagallo (2011); Tigard (2021); Matthias (2004); Sparrow (2007); Rubel et al. (2019).

3 A burgeoning literature discusses whether artificial, non-human agents can be held responsible under certain conditions (see, for instance, Sebastián, 2021; List, 2021). We set this discussion aside here, since even if automatons could aptly be held responsible, this would change nothing from the perspective of our argument.