Much of decision aiding uses a divide-and-conquer strategy to help people with risky decisions. Assessing the utility of outcomes and one's degree of belief in their likelihood are assumed to be separable tasks, the results of which can then be combined to determine the preferred alternative. Evidence from different areas of psychology now provides a growing consensus that this assumption is too simplistic. Observed dependencies between the evaluation of uncertain outcomes and the likelihood of the events giving rise to them are frequent and systematic. These dependencies seem to derive from general strategic processes that take into consideration the asymmetric costs of over- versus underestimates of uncertain quantities. This asymmetric-loss-function interpretation provides a psychological explanation for observed judgments and decisions under uncertainty and links them to other judgment tasks. The decision weights estimated when dependent-utility models are applied to choices are not simply reflections of perceived subjective probability but a response to several constraints, all of which modify the weight given to risky or uncertain outcomes.

Author note: Parts of this article were presented as an invited address at the 23rd Annual Meeting of the Society of Mathematical Psychology. The article was completed while I was a Fellow at the Center for Advanced Study in the Behavioral Sciences, Stanford, California, with financial support from National Science Foundation Grant SES-9022192 and the Graduate School of Business, University of Chicago. I am grateful to Michael Birnbaum, David Budescu, Ward Edwards, Ido Erev, Bill Estes, Claudia Gonzalez, Danny Kahneman, Lola Lopes, Duncan Luce, Barbara Mellers, Robin Hogarth, David Sears, Zur Shapira, Abe Tesser, Tom Wallsten, Martin Weber, and two anonymous reviewers for many helpful comments and suggestions. Correspondence concerning this article should ...

Perhaps more than any other social science, psychology maintains an ongoing debate about its status as a coherent field of scholarship (cf. Fowler, 1990; Koch, 1969; Simon, 1992), often expressing genuine concern about the paucity of established and cumulative results and theories. It is therefore noteworthy when a consensus on an important behavioral fact emerges from different areas of psychology as well as from economics. This article brings together some commonalities in those results and in the mechanisms designed to explain them; they should be of theoretical as well as practical interest to anyone concerned with human judgments and decisions. With this interpretative review, I attempt to present these often technical results and theories in an integrative and more accessible way. I argue that people's behavior in the judgment and decision situations discussed here can be seen as responsive to self- or outwardly imposed constraints in their environment rather than as the result of perceptual or cognitive errors. In particular, I suggest that in situations in which an uncertain quantity needs to be assessed, for example, the probability with which some event will occur or the value of ...
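To make the asymmetric-loss-function idea concrete, a standard textbook illustration may help; the notation and the specific loss function below are my own additions for exposition, not the article's. Suppose a judge must report an estimate $e$ of an uncertain quantity $X$ with subjective distribution function $F$, and suppose overestimates and underestimates carry different per-unit costs $c_o$ and $c_u$:

\[
L(e, X) = c_o \,\max(e - X, 0) + c_u \,\max(X - e, 0), \qquad c_o, c_u > 0 .
\]

The report that minimizes expected loss under $F$ is then not the mean or the median but the quantile

\[
e^{*} = \arg\min_{e}\; \mathbb{E}_{X \sim F}\!\left[ L(e, X) \right] = F^{-1}\!\left( \frac{c_u}{c_o + c_u} \right),
\]

so when underestimates are the costlier error ($c_u > c_o$) the optimal report lies above the median, and the reverse holds when overestimates are costlier. On this reading, systematically shifted probability or value judgments can be strategic responses to asymmetric costs rather than perceptual or cognitive errors, which is the sense of the asymmetric-loss-function interpretation sketched in the abstract above.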