1. Introduction. This note is concerned with the problem of selecting the best one (or any other specified number) of several populations. It is restricted to the symmetric case where typically the observations consist of samples of equal size from the different populations. For certain families of distributions, Bahadur and Goodman (1952) have proved that the natural selection procedure uniformly minimizes the risk among all symmetric procedures for a large class of loss functions. In Section 2 we give an alternative proof of this theorem, and in Section 3 show that the theorem implies many other optimum properties, including one obtained in a different manner by Hall (1959).

The problem of selecting the best one of $s$ populations is a finite decision problem with decisions $d_1, \cdots, d_s$, where $d_i$ denotes the selection of the $i$th population. A (randomized) procedure $\varphi$ assigns to each sample point $x$ probabilities $\varphi(x) = (\varphi_1(x), \cdots, \varphi_s(x))$ with $\sum_i \varphi_i(x) = 1$ for all $x$. We suppose that the distribution $P_\theta$ of $X$ depends on the parameter $\theta$ and that the loss resulting from decision $d_i$ when $\theta$ is the true parameter value is $L(\theta, d_i)$.

Corresponding to the symmetry assumed for the selection problem, we shall assume that the problem is invariant under the finite transformation group $G = \{g_1, \cdots, g_N\}$: if the distribution of $X$ is $\mathcal{L}(X) = P_\theta$, the random variable $g_i X$ has distribution $\mathcal{L}(g_i X) = P_{\bar{g}_i \theta}$, where $g_i$ and $\bar{g}_i$ are 1:1 mappings respectively of the sample space and of the parameter space onto themselves; furthermore, there exist transformations $g_1^*, \cdots, g_N^*$ of the decision space (i.e. permutations of $d_1, \cdots, d_s$) such that for any $i$, $j$ and $\theta$

(1) $L(\bar{g}_j \theta, g_j^* d_i) = L(\theta, d_i)$.

A procedure $\varphi$ is then said to be invariant if

(2) $g^* \varphi(x) = \varphi(gx)$ for all $x$ and $g$.

The procedure taking on the value $\varphi(gx)$ at the point $x$ will be denoted by $\varphi_g$, and (2) can then be written as $g^* \varphi_{g^{-1}} = \varphi$. To prove that their procedure uniformly minimizes the risk among all invariant procedures, Bahadur and Goodman first characterize the totality of invariant procedures. An alternative proof can be based on the following lemma concerning general finite invariant decision problems.
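As a concrete illustration (not part of the original argument), the following Python sketch implements one common instance of the natural selection procedure: with equal-size samples from $s$ populations, select the population with the largest sample mean. The helper `check_invariance` verifies the invariance condition (2), $g^*\varphi(x) = \varphi(gx)$, for a permutation $g$ of the populations; the function names and the choice of the sample mean as the selection statistic are assumptions made for this example only.

```python
import random

def natural_selection(samples):
    """Natural selection procedure (sketch): return the index of the
    population whose sample mean is largest (ties broken by lowest index;
    with continuous data, ties occur with probability zero)."""
    means = [sum(s) / len(s) for s in samples]
    return max(range(len(samples)), key=lambda i: means[i])

def check_invariance(samples, perm):
    """Check condition (2) for the permutation g given by `perm`:
    selecting from the permuted data g(x) must pick out the same
    underlying population as applying g* to the original decision."""
    d = natural_selection(samples)                        # phi(x)
    permuted = [samples[perm[i]] for i in range(len(perm))]  # g(x)
    d_perm = natural_selection(permuted)                  # phi(g x)
    # The decision is equivariant iff both choices name the same sample.
    return permuted[d_perm] == samples[d]

random.seed(0)
# Three normal populations with means 0, 0.5, 1 and ten observations each.
samples = [[random.gauss(mu, 1.0) for _ in range(10)] for mu in (0.0, 0.5, 1.0)]
print(check_invariance(samples, [2, 0, 1]))  # True
```

Because the sample mean is computed symmetrically in the populations, relabeling them permutes the decisions in the same way, which is exactly the invariance (2) that the Bahadur–Goodman theorem exploits.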