Background
Artificial intelligence (AI) chatbots are novel computer programs that generate text or other content in natural language. Academic publishers are adapting to the transformative role of AI chatbots in producing or facilitating scientific research. This study aimed to examine the policies established by scientific, technical, and medical academic publishers for defining and regulating authors’ responsible use of AI chatbots.

Methods
This cross-sectional audit examined the publicly available policies of 163 academic publishers indexed as members of the International Association of Scientific, Technical, and Medical Publishers (STM). Data extraction from the publicly available policies on the webpages of all STM academic publishers was performed independently in duplicate, with content analysis reviewed by a third contributor (September 2023 - December 2023). Data were categorized into policy elements, such as ‘proofreading’ and ‘image generation’. Counts and percentages of ‘yes’ (i.e., permitted), ‘no’, and ‘N/A’ were established for each policy element.

Results
A total of 56/163 (34.4%) STM academic publishers had a publicly available policy guiding authors’ use of AI chatbots. No policy allowed authorship credit for AI chatbots (or other generative technology). Most (49/56, 87.5%) required specific disclosure of AI chatbot use. Four publishers’ policies placed a complete ban on the use of AI tools by authors.

Conclusions
Only a third of STM academic publishers had publicly available policies as of December 2023. A re-examination of all STM members in 12-18 months may uncover evolving approaches toward AI chatbot use, with more academic publishers having a policy.