With the emergence of clonal selection ideas in the 1950s, the development of immune cell repertoires was seen to require the negative selection of self-reacting cells, with surviving cells exhibiting a broad range of specificities. Thus, confronting a universe of non-self antigens, a potential host organism spread its resources widely. In the 1960s, the two-signal hypothesis showed how this might work. In the 1970s, however, an affinity/avidity model further proposed that, anticipating a pathogen strategy of exploiting “holes” in the repertoire created by negative selection, hosts should also positively select near-self-reacting cells. A microbe mutating an antigen from a form foreign to its host to a form resembling that host should prevail over host defences with respect to that antigen. By mutating a step towards host self along the path from non-self to self, such a microbe should come to dominate the microbe population. Through progressive stepwise mutations, such microbes would become better adapted, to the detriment of their hosts. But they would lose this advantage if, as they mutated closer to host self, they encountered progressively stiffer host defences. Thus, as described in the affinity/avidity model, positive selection of lymphocytes for specificities that were very close to, but not quite, anti-self (ie, “anti-near-self”) should be an important host adaptation. While positive selection affects both B and T cells, the underlying mechanisms remain uncertain. Converging evidence from studies of lymphocyte activation, whether polyclonal (with lectins as “antigen analogues”) or monoclonal (by specific antigen), supports the original generic affinity/avidity model for countering mutations towards host self.
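The stepwise argument lends itself to a toy illustration. The sketch below is not from the source: it assumes a one-dimensional antigenic distance from host self, a fixed-width repertoire “hole” produced by negative selection, and an arbitrary linear defence boost from positive selection of anti-near-self clones; the constants are illustrative only. It demonstrates only the qualitative claim of the affinity/avidity model: without the boost, greedy mutation towards self escapes into the hole, whereas with it, the microbe stalls where defences stiffen.

```python
# Toy sketch (hypothetical assumptions, not the source's model): antigenic
# distance d runs from 1.0 (fully foreign) to 0.0 (identical to host self).
# Negative selection deletes all clones recognising d < HOLE, leaving a
# repertoire "hole" next to self; positive selection (if enabled) enriches
# anti-near-self clones, stiffening defence as d approaches the hole.

HOLE = 0.10   # clones recognising d < HOLE are negatively selected (deleted)
STEP = 0.05   # antigenic change per mutation towards host self

def defence(d: float, positive_selection: bool) -> float:
    """Toy host defence strength faced by an antigen at distance d from self."""
    if d < HOLE:
        return 0.0                           # inside the hole: no clones respond
    strength = 1.0                           # broad, even coverage of non-self
    if positive_selection:
        strength += 4.0 * max(0.0, 0.3 - d)  # arbitrary anti-near-self boost
    return strength

def evolve(positive_selection: bool) -> float:
    """Greedy stepwise mutation towards self, kept while not disadvantageous."""
    d = 1.0
    while d > 0.0:
        nxt = max(0.0, d - STEP)
        if defence(nxt, positive_selection) > defence(d, positive_selection):
            break                            # stiffer defence ahead: stop
        d = nxt                              # neutral or beneficial step: keep it
    return d

print("no positive selection: microbe reaches d =", round(evolve(False), 2))
print("positive selection on: microbe stalls at d =", round(evolve(True), 2))
```

Run as written, the first case drifts all the way into the hole (d near 0, ie, host self mimicked with impunity), while the second stalls well outside it (d around 0.3), where the positively selected anti-near-self clones make each further step towards self costly.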