If you want ECAs on your team, you'll want them to understand how you do things. You'll also want to understand how they do things. Ideally your team's actions should be synchronized, yet complementary to some degree. That is, you seek an efficient and feasible division of labour. The ``you-things'' and the ``they-things'' have to be complete (together they must cover all the sub-tasks leading to the goal) and as sound as possible (any overlap decreases efficiency). This paper argues that, due to the impenetrability of beliefs, an artificial agent will be unable to join a group with such synchronized diversity by attempting to balance its own beliefs and preferences against others' beliefs and preferences (i.e., by applying a theory of mind). Instead, successful group membership requires ignoring individual utility and taking actions to make the world (and everyone in it) as predictable as possible. Agents will be more predictable if they not only do the ``they-things,'' but also make it clear to others what those things are. However, the group's goals are shaped by the actions of its members, and so a boundary that identifies group membership is necessary. In essence, all agents must be able to identify to which group they belong. After filling in this argument, I give a short introduction to an emotional identity theory that may provide a way forward, and attempt to convince you that you will want ECAs on your team, but only after this division-of-labour problem is solved.