Released less than three months ago, ChatGPT has become the center of scholarly attention worldwide. This artificial intelligence (AI) language model has over 100 million users and has generated extensive discussion about its accuracy, its advantages, and its threats to science and education. Its accuracy in law, linguistics, mathematics, and medicine has already been evaluated, with most results suggesting that ChatGPT could achieve a passing grade in these domains. Its performance in combined subjects or the social sciences has yet to be tested. The large amount of information available in these general areas may yield more accurate performance. Still, specific subjects within a field, especially those with controversial research findings, can lead to significant errors that teachers and researchers could quickly spot. In this study, ChatGPT was tested on its accuracy regarding exercise addiction, a subject in sports sciences and psychology associated with more than 1,000 publications. ChatGPT gave several correct answers to the 20 questions but failed the test with a score of 45%, a performance comparable to that in other subjects already tested. However, when prompted to write a general introductory editorial on AI’s role in sports, ChatGPT performed well. Plagiarism detectors could not identify the AI-generated text, but AI detectors did. It can therefore be concluded that the system performs relatively well on general issues but needs further development in more specific areas. Students and scholars cannot rely on ChatGPT to do their work for them. Still, future versions could raise dilemmas of originality, since the system does not disclose the sources of its information.