Recent decades have witnessed tremendous progress in artificial intelligence and in the development of autonomous systems that rely on it. Critics, however, have pointed to the difficulty of allocating responsibility for the actions of an autonomous system, especially when that system causes harm or damage. The highly autonomous behavior of such systems, for which neither the programmer, the manufacturer, nor the operator seems responsible, has been suspected to generate responsibility gaps. This has been the cause of much concern. In this article, I propose a more optimistic view of artificial intelligence, raising two challenges for responsibility-gap pessimists. First, proponents of responsibility gaps must say more about when such gaps occur. Once we accept a difficult-to-reject plausibility constraint on the emergence of these gaps, it becomes unclear in which situations they actually arise. Second, even assuming that responsibility gaps do occur, more must be said about why we should be concerned about them in the first place. I proceed by defusing what I take to be the two most important concerns about responsibility gaps: one relating to their consequences and the other relating to violations of jus in bello.
Recent decades have seen established liberal democracies expand their surveillance capacities on a massive scale. This article explores what is problematic about government surveillance by democracies. It proceeds by distinguishing three potential sources of concern: (1) the concern that governments diminish citizens’ privacy by collecting their data, (2) the concern that governments diminish citizens’ privacy by accessing their data, and (3) the concern that the collected data may be used for objectionable purposes. Discussing the meaning and value of privacy, the article argues that only the latter two constitute compelling independent concerns. It then focuses particularly on the third concern, exploring the risk of government surveillance being used to enforce illegitimate laws. It discusses three legitimacy-related reasons why we should be worried about the expansion of surveillance capacities in established democracies: (1) Even established democracies might decay. There is a risk that surveillance capacities used for democratically legitimated purposes today will be used for poorly legitimated purposes in the future. (2) Surveillance may be used to enforce laws that lack legitimacy because of the disproportionate punishment attached to their violation. (3) The democratic procedures in established democracies fail to conform to the requirements formulated by mainstream theories of democratic legitimacy; surveillance is thus used to enforce laws whose legitimacy is in doubt.