This study reports on a novel phenomenon observed in scholarly publications: some research articles unrelated to the field of artificial intelligence (AI) contain confusing AI-generated content (AIGC), such as the phrase “As an AI language model...”. We conceptualize this phenomenon as “AIGC footprints”. To provide early evidence, we conducted a small-scale investigation of 25 such articles. We found that the appearance of AIGC footprints coincides with the public launch of ChatGPT. The 25 articles were published by authors from countries in Central Asia, South Asia, and Africa. Among these authors were assistant professors (n = 5), Ph.D. researchers (n = 6), and Ph.D. and master’s students (n = 3). Single authors (n = 16) and single affiliations (n = 23) were common. Analysis of the article content revealed that some authors used ChatGPT for literature reviews (n = 11) or idea generation (n = 11). Articles with AIGC footprints are widely distributed across professional fields, such as Communication and Media Studies (n = 3), Cybersecurity (n = 2), Civil Engineering (n = 2), and Agricultural Technology (n = 2). The 25 articles were published in 18 different academic journals, most of which did not disclose their article processing charges (APCs) on their websites (n = 11) and were not indexed by Web of Science, Scopus, or DOAJ (n = 17). The emergence of AIGC footprints points to quality-assurance challenges for scholarly publishing and higher education, as well as potential problems in research integrity. We offer several recommendations, including developing best-practice research guidelines for the AIGC era, integrating the transparent use of AIGC into higher education instruction, and fostering ethical leadership.