Consider the following claim: given the choice between saving a life and preventing any number of people from temporarily experiencing a mild headache, you should always save the life. Many moral theorists accept this claim. In doing so, they commit themselves to some form of ‘moral absolutism’: the view that there are some moral considerations (like being able to save a life) that cannot be outweighed by any number of lesser moral considerations (like being able to avert a mild headache). In contexts of certainty, it is clear what moral absolutism requires of you. However, what does it require of you when deciding under risk? What ought you to do when there is a chance that, say, you will not succeed in saving the life? In recent years, various critics have argued that moral absolutism cannot satisfactorily deal with risk and should, therefore, be abandoned. In this paper, we show that moral absolutism can answer its critics by drawing on—of all things—orthodox expected utility theory.
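To see how such an absolutist ranking might live inside orthodox expected utility theory, consider a minimal lexicographic sketch; this is our illustration of one natural construction, not necessarily the one the paper defends. Assign each outcome a pair of utilities, u_1 for the lexically prior consideration (lives saved) and u_2 for the lesser one (headaches averted), and rank risky prospects by expected u_1 first, breaking ties by expected u_2:

\[
p \succsim q \iff \mathbb{E}_p[u_1] > \mathbb{E}_q[u_1]
\;\text{or}\;
\bigl(\mathbb{E}_p[u_1] = \mathbb{E}_q[u_1] \text{ and } \mathbb{E}_p[u_2] \ge \mathbb{E}_q[u_2]\bigr).
\]

On this rule, no gain in expected u_2, however large, can compensate for any loss in expected u_1, which is the absolutist thought restated for risky prospects.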
Many moral theories are committed to the idea that some kinds of moral considerations should be respected, whatever the cost to ‘lesser’ types of considerations. A person's life, for instance, should not be sacrificed for the trivial pleasures of others, no matter how many would benefit. However, according to the decision-theoretic critique of lexical priority theories, accepting lexical priorities inevitably leads us to make unacceptable decisions in risky situations. It seems that to operate in a risky world, we must reject lexical priorities altogether. This paper argues that lexical priority theories can, in fact, offer satisfactory guidance in risky situations. It does so by equipping lexical priority theories with overlooked resources from decision theory.
Increasingly, the decision-makers in private and public institutions are predictive algorithmic systems, not humans. This article argues that relying on algorithmic systems is procedurally unjust in contexts involving background conditions of structural injustice. Under such nonideal conditions, algorithmic systems, if left to their own devices, cannot meet a necessary condition of procedural justice, because they fail to provide a sufficiently nuanced model of which cases count as relevantly similar. Resolving this problem requires deliberative capacities uniquely available to human agents. After exploring the limitations of existing formal algorithmic fairness strategies, the article argues that procedural justice requires that human agents relying wholly or in part on algorithmic systems proceed with caution: by avoiding doxastic negligence about algorithmic outputs, by exercising deliberative capacities when making similarity judgments, and by suspending belief and gathering additional information in light of higher-order uncertainty.
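As a toy illustration of the closing recommendation, a decision pipeline can be wrapped so that it defers to human deliberation whenever disagreement across its predictive models, one rough proxy for higher-order uncertainty, is too high. The function name, interface, and threshold below are hypothetical stand-ins of ours, not the article's:

from statistics import mean, pstdev

def decide_with_caution(case, models, defer_threshold=0.15):
    """Return 'APPROVE'/'DENY', or 'DEFER' to a human reviewer.

    models: callables mapping a case to a probability of a positive
    outcome (stand-ins for trained predictive systems).
    """
    scores = [m(case) for m in models]
    if pstdev(scores) > defer_threshold:  # models disagree markedly:
        return "DEFER"                    # suspend belief, gather more information
    return "APPROVE" if mean(scores) >= 0.5 else "DENY"

# Toy stand-ins for trained models (illustrative only).
models = [lambda case: 0.8, lambda case: 0.4, lambda case: 0.6]
print(decide_with_caution({"applicant_id": 1}, models))  # -> DEFER

Deferral here is the computational analogue of suspending belief and gathering additional information; the similarity judgments themselves remain with the human agent.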
Non-Consequentialist moral theories posit the existence of moral constraints: prohibitions on performing particular kinds of wrongful acts, regardless of the good those acts could produce. Many have argued that such theories cannot give satisfactory verdicts about what we morally ought to do when there is some probability that we will violate a moral constraint. In this article, I defend Non-Consequentialist theories from this critique. Using a general choice-theoretic framework, I identify various types of Non-Consequentialism that have otherwise been conflated in the debate. I then prove a number of formal possibility and impossibility results establishing which types of Non-Consequentialism can, and which cannot, give us adequate guidance through a risky world.
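For a flavour of the kind of decision rule such a framework can express, consider one candidate (an illustrative example of ours; the article's taxonomy and results are more general): rank acts first by their probability of violating a constraint, and only then by the expected good they produce,

\[
a \succsim b \iff \Pr(V_a) < \Pr(V_b)
\;\text{or}\;
\bigl(\Pr(V_a) = \Pr(V_b) \text{ and } \mathbb{E}[g(a)] \ge \mathbb{E}[g(b)]\bigr),
\]

where V_x is the event that act x violates a moral constraint and g measures the good an act produces. Whether rules of roughly this shape can deliver adequate guidance is the kind of question the possibility and impossibility results adjudicate.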