Women are generally informed that mammography screening reduces the risk of dying from breast cancer by 25%. Does that mean that of 100 women who participate in screening, 25 lives will be saved? Although many people believe this to be the case, the conclusion is not justified. The figure means that of 1,000 women who participate in screening, 3 will die from breast cancer within 10 years, whereas of 1,000 women who do not participate, 4 will die. The difference between 4 and 3 is the 25% "relative risk reduction." Expressed as an "absolute risk reduction," however, the benefit is 1 in 1,000, that is, 0.1%. Cancer organizations and health departments continue to inform women of the relative risk reduction, which gives a higher number (25% as compared to 0.1%) and makes the benefit of screening appear larger than if it were expressed in absolute risks.

The topic of this chapter is the representation of information about medical risks. As the case of mammography screening illustrates, the same information can be presented in various ways. The general point is that information always requires representation, and the choice among alternative representations can influence patients' willingness to participate in screening and, more generally, their understanding of risks and their choices of medical treatments. The ideal of "informed consent" can be achieved only if the patient knows the pros and cons of a treatment, or the chances that a particular diagnosis is right or wrong. To communicate such uncertainties to patients, however, the physician must first understand statistical information and its implications. This requirement contrasts sharply with the fact that physicians are rarely trained in risk communication, and some still think that medicine can dispense with statistics and psychology.
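The arithmetic behind the two representations can be made explicit. The following sketch (using the illustrative mortality figures from the text above) computes both numbers from the same underlying data, showing how the 25% and the 0.1% describe one and the same benefit:

```python
# Breast cancer deaths per 1,000 women over 10 years (figures from the text).
deaths_without_screening = 4 / 1000
deaths_with_screening = 3 / 1000

# Relative risk reduction: deaths avoided as a fraction of the baseline deaths.
rrr = (deaths_without_screening - deaths_with_screening) / deaths_without_screening

# Absolute risk reduction: deaths avoided as a fraction of all women screened.
arr = deaths_without_screening - deaths_with_screening

print(f"Relative risk reduction: {rrr:.0%}")   # 25%
print(f"Absolute risk reduction: {arr:.1%}")   # 0.1%
```

The same difference of one death per 1,000 women yields 25% in one representation and 0.1% in the other, which is why the choice of format matters for communication.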
Such reluctance may also explain why previous research observed that a majority of physicians do not use relevant statistical information properly in diagnostic inference. Casscells, Schoenberger, and Grayboys (1978), for instance, asked 60 house officers, students, and physicians at the Harvard Medical School to estimate the probability of an unnamed disease given the following information:

"If a test to detect a disease whose prevalence is 1/1,000 has a false positive rate of 5 per cent, what is the chance that a person found to have a positive result actually has the disease, assuming that you know nothing about the person's symptoms or signs?" (p. 999)

The estimates varied wildly, from the most frequent estimate, 95% (27 out of 60), down to 2% (11 out of 60). The value of 2% is obtained by inserting the problem information into Bayes' rule (see below), assuming that the sensitivity of the test, which is not specified in the problem, is approximately 100%. Casscells et al. (1978) concluded that "(...) in this group of students and physicians, formal decision analysis was almost entirely unknown and even common-sense reasoning about the interpretation of laboratory data was uncommon..."
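The 2% answer follows directly from Bayes' rule. A minimal sketch of the calculation, using the figures given in the problem (the 100% sensitivity is the assumption noted above, not something stated in the problem itself):

```python
# Figures from the Casscells et al. (1978) problem.
prevalence = 1 / 1000        # P(disease)
false_positive_rate = 0.05   # P(positive | no disease)
sensitivity = 1.0            # P(positive | disease); assumed ~100%, not given

# Bayes' rule: P(disease | positive) = P(pos | disease) P(disease) / P(pos)
p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))
p_disease_given_positive = sensitivity * prevalence / p_positive

print(f"P(disease | positive) = {p_disease_given_positive:.1%}")  # about 2%
```

Intuitively, in 1,000 tested persons there is about 1 true positive but roughly 50 false positives, so a positive result indicates the disease in only about 1 case in 51, close to 2%.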