Internalism about a person's good is roughly the view that in order for something to intrinsically enhance a person's well-being, that person must be capable of caring about that thing. I argue in this paper that internalism about a person's good should be rejected. Though many philosophers accept the view, Connie Rosati provides the most comprehensive case in its favor. Her defense consists mainly in offering five independent arguments for thinking that at least some form of internalism about one's good is true. I argue that, on closer inspection, not one of these arguments succeeds. The problems don't end there, however. While Rosati offers good reasons to think that what she calls 'two-tier internalism' would be the best way to formulate the intuition behind internalism about one's good, I argue that two-tier internalism is in fact false: no substantive theory of well-being is consistent with it. Accordingly, there is reason to think that even the best version of internalism about one's good is false. Thus, I conclude, the prospects for internalism about a person's good do not look promising.
Whether causing flash crashes in financial markets, purchasing illegal drugs, or running over pedestrians, AI is increasingly engaging in activity that would be criminal for a natural person, or even an artificial person like a corporation. We argue that criminal law falls short in cases where an AI causes certain types of harm and there are no practically or legally identifiable upstream criminal actors. This Article explores potential solutions to this problem, focusing on holding AI directly criminally liable where it is acting autonomously and irreducibly. Conventional wisdom holds that punishing AI is incongruous with basic criminal law principles such as the capacity for culpability and the requirement of a guilty mind. Drawing on analogies to corporate and strict criminal liability, as well as familiar imputation principles, we show how a coherent theoretical case can be constructed for AI punishment. AI punishment could result in general deterrence and expressive benefits, and it need not run afoul of negative limitations such as punishing in excess of culpability. Ultimately, however, punishing AI is not justified, because it might entail significant costs and it would certainly require radical legal changes. Modest changes to existing criminal laws that target persons, together with potentially expanded civil liability, are a better solution to AI crime.
This article introduces the main conceptual and normative questions about willful ignorance. The first section asks what willful ignorance is, while the second section asks why—and how much—it merits moral or legal condemnation. My approach is to critically examine the criminal law's view of willful ignorance. Doing so not only reveals the range of positions one might take about the phenomenon but also sheds light on foundational questions about the nature of culpability and the relation between law and morality.