A growing number of news organisations have set up specific guidelines to govern how they use artificial intelligence (AI). This article analyses a set of 52 guidelines from publishers in Belgium, Brazil, Canada, Finland, Germany, India, the Netherlands, Norway, Sweden, Switzerland, the United Kingdom, and the United States. Looking at both formal and thematic characteristics, we provide comparative insights into how news outlets address both expectations and concerns when it comes to using AI in the news. Drawing on neo-institutional theory and the concept of institutional isomorphism, we argue that the policies show signs of homogeneity, likely explained by isomorphic dynamics that arose in response to the uncertainty created by the rise of generative AI after the release of ChatGPT in November 2022. Our study shows that publishers have already begun to converge in their guidelines on key points such as transparency and human supervision when dealing with AI-generated content. However, we argue that national and organisational idiosyncrasies continue to matter in shaping publishers’ practices, with both accounting for some of the variation seen in the data. We conclude by pointing out blind spots in current AI guidelines around technological dependency, sustainable AI, and inequalities, and by providing directions for further research.