As discussed in Section 6.4 and at the beginning of Section 6.5, the F-test from the ANOVA table allows us to test the null hypothesis "The population means of all of the groups/treatments are equal." The alternative hypothesis is simply that "At least two are not equal." Often this isn't what we want to know! Say we are comparing 20 possible treatments for a disease. The ANOVA F-test (sometimes called the omnibus test) could only tell us that at least one of the treatments worked differently from the others. We might, however, want to rank the 20 from best to worst and say which of these differences are significant. We might want to compare all the treatments produced by one company to those of another, or perhaps all the treatments based on one idea to those based on another.

An obvious suggestion in each of these cases would be to simply do a large number of t-tests. To rank the 20 treatments from best to worst, we could do a separate t-test for each possible pairwise comparison (there are 190 of them). To compare the two companies or the two ideas, we could group all of the observations from the related treatments together and use t-tests to see whether the groups differ. One difficulty with this approach is that the α-level (the probability of a Type I error) may no longer be what we want it to be.
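To see the difficulty concretely, here is a minimal Python sketch (the group size, the simulated data, and the use of scipy.stats.ttest_ind are illustrative assumptions, not part of the text). It simulates 20 treatments whose population means are all truly equal and then runs every pairwise t-test; even though every null hypothesis is true, some comparisons typically come out "significant" at α = 0.05.

```python
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical setting: 20 treatments, 10 observations each, and all
# population means truly equal, so every pairwise null hypothesis is true.
n_treatments, n_per_group = 20, 10
groups = [rng.normal(loc=0.0, scale=1.0, size=n_per_group)
          for _ in range(n_treatments)]

pairs = list(combinations(range(n_treatments), 2))
print(len(pairs))  # 190 possible pairwise comparisons

alpha = 0.05
rejections = sum(
    1 for i, j in pairs
    if stats.ttest_ind(groups[i], groups[j]).pvalue < alpha
)

# Even with no real differences among the treatments, several of the
# 190 tests are typically "significant" at the 0.05 comparison-wise level.
print(rejections)
```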
Sidak's Formula

Stepping back from the ANOVA setting for a minute, say we wish to conduct one-sample t-tests on twenty completely independent populations. If we set α = 0.05 for the first test, that means that:

0.05 = α = P[reject H_0 for test one | H_0 is true for test one]

We could write the same for the other nineteen populations as well. If we are concerned about all twenty populations together, though, we might be more interested in the probability that we reject a true null hypothesis at all. That is,

α_T = P[reject H_0 for at least one of the tests | H_0 is true for all twenty tests]

Using the rules of probability, and the fact that we assumed the tests were independent for this example, we can calculate what α_T would be if we used α = 0.05 for the comparison-wise rate.
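Since the twenty tests are assumed independent, a sketch of that calculation (the rounding at the end is ours) uses the complement rule and multiplies the twenty probabilities of correctly retaining each true H_0:

α_T = 1 − P[no true H_0 is rejected in any of the twenty tests]
    = 1 − (1 − 0.05)^20
    = 1 − (0.95)^20
    ≈ 0.64

That is, using a comparison-wise rate of α = 0.05 on twenty independent tests gives roughly a 64% chance of rejecting at least one true null hypothesis, far above the 5% we intended.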