Efforts to set standards for artificial intelligence (AI) reveal a striking pattern: technical experts from geopolitical rivals, such as the United States and China, readily collaborate on technical AI standards within transnational standard‐setting organizations, whereas governments are much less willing to collaborate on global ethical AI standards within international organizations. Whether competition or cooperation prevails can be explained by three variables: the actors that make up the membership of the standard‐setting organization, the issues on which the organization's standard‐setting efforts focus, and the "games" actors play when trying to set standards within a particular type of organization. A preliminary empirical analysis supports the contention that actors, issues, and games affect the prospects for cooperation on global AI standards. This matters because shared standards are vital for achieving truly global frameworks for the governance of AI. Such global frameworks, in turn, lower transaction costs and reduce the probability that AI systems will emerge that threaten human rights and fundamental freedoms.