Automated text generation has shown that AI can take over certain writing tasks, for instance in automated journalism. So far, however, little attention has been paid to how people deal with this development, primarily because opportunities for application were limited. With the release of ChatGPT, however, the question of how people perceive more complex AI-generated texts suddenly became relevant. Previous research suggests no differences between human and AI authorship in terms of message credibility. However, only a few studies have investigated actual AI-written content on complex topics or examined sufficiently large samples. In a between-groups experiment (N = 734), we examined readers’ perceptions of AI authorship of a GPT-written science journalism article. An equivalence test showed that labeling a GPT-written text as AI-written rather than human-written reduced the article’s perceived credibility (d = 0.36). Moreover, AI authorship decreased the source’s perceived credibility (d = 0.24), anthropomorphism (d = 0.67), and intelligence (d = 0.41). Thus, the mere perception of an AI as the author of a text has a small but significant effect on credibility perceptions. The findings are discussed against the backdrop of the growing availability of AI-generated content and greater awareness of AI authorship.