With the introduction of routine ELISA screening of blood for antibodies to HIV in March of 1985 (1), it became apparent that HIV ELISA tests were among the best developed by the diagnostic industry. Sensitivities and specificities of greater than 99% were reported and substantiated by nearly 100,000 tests industrywide in preclinical and clinical evaluations prior to FDA approval (2,3,4). With such outstanding results, why the need for confirmation?

As large numbers of random blood donors were tested, it became clear that the distribution of ELISA absorbance values did not assume a normal shape but was positively skewed (5,6). The cutoffs for each of the licensed screening tests were set within the tail of this distribution such that 0.2–1.0% of random donors were considered initially reactive. Following FDA recommendations, these initially reactive samples were then tested in duplicate. Units that were repeatedly reactive were then discarded (1).

If only the disposition of blood were involved, ELISA results might be sufficient. Notification of donors, however, was predicated on the ability to confirm the ELISA reactivity with a Western blot. A disparity between the repeatedly reactive samples in the tail of the distribution and the "true" positives confirmed by Western blot was immediately realized: the frequency of HIV infection, as confirmed by Western blot, was not equal to the frequency of ELISA reactivity in the random donor population (3,7). The positive predictive value (PPV) of a repeatedly reactive ELISA result is directly related to the actual prevalence of infection, as estimated by Western blot confirmed positives (8). Since the sensitivity and specificity of ELISA tests are known, the probability that a repeatedly reactive ELISA test is a true positive can be estimated using Bayes' theorem (9). Applying the equation

PPV = (sensitivity × P) / [sensitivity × P + (1 − specificity) × (1 − P)]

where P = the prevalence of infection, the percentage of false positive (FP) results can be predicted, since FP = 100% − PPV. This relationship between prevalence and false positivity is illustrated in Figure 1. If we assume test sensitivities and specificities of 99.9%, as the prevalence decreases from 1.0% to 0.04%, the percentage of false positives increases from <10% to over 70%.

Using this model, if 68 million units of blood are tested annually worldwide, 68,000 (99.9% specificity) will be repeatedly reactive by ELISA and discarded. Of these, up to 47,600 (70%) donors would be deferred from donating blood, even though they are not infected with HIV. In 1985 the American Red Cross reported that of 868,000 units of blood tested, 1,455 were discarded, but the antibody to HIV could be positively confirmed in only 333 donors, for a projected prevalence of 0.038% (7). This figure agrees well with theoretical estimates of prevalence based on the number of diagnosed AIDS individuals and the frequency of transfusion-associated AIDS prior to 1988 (8). Therefore, when the prevalence of a disease is <0.1%, the probability of false positive results is extremely high and the need for app...
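The prevalence/false-positivity relationship described above can be reproduced directly from the Bayes' theorem expression for PPV. The following Python sketch is illustrative only: the function names and the specific prevalence values are assumptions chosen to mirror the worked example in the text (99.9% sensitivity and specificity, prevalence falling from 1.0% to 0.04%); it is not part of the original analysis.

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """PPV of a repeatedly reactive screening result via Bayes' theorem.

    All arguments are fractions (e.g., 0.999), not percentages.
    """
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)


def false_positive_pct(sensitivity, specificity, prevalence):
    """Percentage of repeatedly reactive results that are false positives,
    i.e., FP = 100% - PPV."""
    ppv = positive_predictive_value(sensitivity, specificity, prevalence)
    return 100.0 * (1.0 - ppv)


if __name__ == "__main__":
    # Assumed sensitivity and specificity of 99.9%, as in the text.
    sens = spec = 0.999
    for prevalence in (0.01, 0.004, 0.001, 0.0004):  # 1.0% down to 0.04%
        fp = false_positive_pct(sens, spec, prevalence)
        print(f"prevalence {prevalence:.2%}: false positives = {fp:.0f}% of reactives")
```

Run as written, the sketch shows the false-positive fraction rising from roughly 9% at 1.0% prevalence to just over 70% at 0.04% prevalence, matching the figures quoted above.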