Artificial intelligence (AI) has the potential to improve life and reduce risks by providing large amounts of information embedded in big databases and by suggesting or implementing automated decisions under uncertainty. Yet, in the design of a prescriptive AI algorithm, problems may occur, first and most obviously, if the AI information is wrong or incomplete. The main point of this article, however, is that under uncertainty the decision algorithm, rational or not, embeds, in one way or another, a risk attitude in addition to deterministic preferences. That risk attitude, implemented in the software, is chosen by the analysts, the organization that they serve, the experts who inform them, and more generally by the process of identifying possible options. The problem is that it may or may not represent, as it should, the preferences of the actual decision maker (the risk manager) and of the people subjected to his/her decisions. This article briefly describes the sometimes‐serious problem of that discrepancy between the preferences of the risk managers who use an AI output and the risk attitude embedded in the AI system. The recommendation is to make these AI factors as accessible and transparent as possible and to allow for preference adjustments in the model if needed. The formulation of two simplified examples is described: that of a medical doctor and his/her patient using an AI system to decide on a treatment option, and that of a skipper in a sailing race such as the America's Cup, receiving AI‐processed sensor signals about the sailing conditions on different possible courses.
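To make the central point concrete, consider a minimal illustrative sketch (not taken from the article itself; the payoffs and the exponential utility form are assumptions chosen purely for illustration). Suppose the algorithm must rank a sure payoff of 50 against a gamble paying 120 or 0 with equal probability. A risk-neutral rule (expected value) prefers the gamble, but an exponential utility $u(x) = 1 - e^{-x/\rho}$ with a risk tolerance $\rho$ fixed by the analysts can reverse that ranking:

$$
\mathrm{EV}(B) = 0.5(120) + 0.5(0) = 60 \;>\; 50 = \mathrm{EV}(A),
$$
$$
\mathrm{CE}_{\rho}(B) = -\rho \ln\!\bigl(0.5\,e^{-120/\rho} + 0.5\bigr)
\;\approx\;
\begin{cases}
25.8 < 50 & \text{if } \rho = 40 \quad \text{(prefer the sure option } A\text{)},\\[2pt]
51.1 > 50 & \text{if } \rho = 200 \quad \text{(prefer the gamble } B\text{)}.
\end{cases}
$$

In this sketch the recommendation hinges on a parameter $\rho$ that the end user may never see, which is exactly the kind of discrepancy between the embedded risk attitude and the decision maker's preferences that the article warns about.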