Introduction: the Generative Pre-trained Transformer (GPT) is a deep learning language model architecture developed by OpenAI.
Aim: to describe the knowledge networks (at both the theoretical and country levels) of the Generative Pre-trained Transformer (GPT) as an emerging technology.
Results: 222 documents were identified, of which 69 were articles, 50 were conference papers, 36 were editorials, 29 were notes, 19 were letters, 14 were reviews, 3 were conference reviews, and 2 were short surveys. In terms of the number of documents per year, 2 were found in 2019, 10 in 2020, 22 in 2021, 44 in 2022, and 144 in 2023, corresponding to a year-on-year growth rate of at least 100% in every year. The subject area with the highest number of documents was Computer Science, with 90 documents. The most productive countries in relation to GPT were the United States with 60 documents, followed by China with 19, the United Kingdom with 18, India with 15, and Australia with 12. Keyword co-occurrence analysis illustrated the centrality of Artificial Intelligence, Natural Language Processing, Deep Learning, and the term Human around ChatGPT and GPT.
Conclusions: this bibliometric study aimed to describe the knowledge networks of the Generative Pre-trained Transformer (GPT) as an emerging technology. Although only 222 documents were found, this study revealed a high level of international scientific collaboration in the field. The results suggest that GPT is a highly relevant technology with a wide range of potential applications in natural language processing, artificial intelligence, and deep learning.
Moreover, the study qualitatively characterized the main thematic areas surrounding GPT, including its applications in chatbots, text generation, machine translation, and sentiment analysis, among others.