Artificial intelligence (AI) technologies to help authors improve the preparation and quality of their manuscripts and published articles are rapidly increasing in number and sophistication. These include tools to assist with writing, grammar, language, references, statistical analysis, and reporting standards. Editors and publishers also use AI-assisted tools for myriad purposes, including to screen submissions for problems (eg, plagiarism, image manipulation, ethical issues), triage submissions, validate references, edit and code content for publication in different media, and facilitate postpublication search and discoverability. 1

In November 2022, OpenAI released ChatGPT, a new, freely accessible natural language processing tool. 2,3 ChatGPT is an evolution of a chatbot designed to simulate human conversation in response to prompts or questions (GPT stands for "generative pretrained transformer"). The release prompted immediate excitement about its many potential uses 4 but also trepidation about potential misuse, such as concerns about using the language model to cheat on homework assignments, write student essays, and take examinations, including medical licensing examinations. 5

In January 2023, Nature reported on 2 preprints and 2 articles published in the science and health fields that included ChatGPT as a bylined author. 6 Each of these includes an affiliation for ChatGPT, and 1 of the articles includes an email address for the nonhuman "author." According to Nature, that article's inclusion of ChatGPT in the author byline was an "error that will soon be corrected." 6 However, these articles and their nonhuman "authors" have already been indexed in PubMed and Google Scholar.

Nature has since defined a policy to guide the use of large-scale language models in scientific publication, which prohibits naming such tools as a "credited author on a research paper" because "attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility." 7 The policy also advises researchers who use these tools to document this use in the Methods or Acknowledgment sections of manuscripts. 7 Other journals 8,9 and organizations 10 are swiftly developing policies that ban the inclusion of these nonhuman technologies as "authors" and that range from prohibiting the inclusion of AI-generated text in submitted work 8 to requiring full transparency, responsibility, and accountability for how such tools are used and reported in scholarly publication. 9,10 The International Conference on Machine Learning, which issues calls for papers to be reviewed and discussed at its conferences, has also announced a new policy: "Papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper's experimental analysis."