Human interaction with machines has come a long way, from ancient tools to today's Artificial Intelligence (AI) systems, which now challenge individuals and organisations across professions. ChatGPT (CGPT) has entered the scene as a purported replacement for researchers and research assistants, generating hysteria and hype at every level that CGPT will replace science. Although CGPT is dubbed a substitute for scholars, its true implications can be judged through the lens of errors of omission versus errors of commission, answering whether CGPT plays an effective role in the efficiency and effectiveness of supporting research and practice. Based on the framework of errors of commission and omission, this article tests the function of CGPT as a research assistant, addressing the theoretical question of why biases occur (if any) and how they occur, and the practical question of how to prevent them. The article reports an experiment testing whether CGPT can produce (a) summaries based on citations, (b) citations based on the summaries it produced, and (c) citations based on the published abstracts of research articles in the literature. For consistency, the study uses one author (who was able and willing to participate), 34 publications in refereed journals, and multiple experimental runs. The results show three patterns. First, CGPT correctly produced summaries for all citations, and on a 10-point scale the proximity between the published abstract and the generated summary ranged from 5 to 10, with an average of about 7. Second, the citations CGPT generated from its own summaries were 100% inaccurate and biased. Third, the citations generated from the published abstracts were 100% biased. Drawing on the theory of errors of omission and errors of commission, the study explains where, how, and why these errors occur in the contextualised world.
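To make the three-task protocol concrete, the sketch below outlines it in Python. It is a minimal illustration only: the study's experiments were run interactively through the ChatGPT interface, and `ask_chatgpt` and `run_protocol` are hypothetical names standing in for that interaction, not the study's actual tooling.

```python
# Minimal sketch of the three-task protocol, assuming access to a
# ChatGPT session. `ask_chatgpt` is a hypothetical placeholder; wire it
# to a real API call or conduct the exchange manually.

def ask_chatgpt(prompt: str) -> str:
    """Hypothetical stand-in for a single ChatGPT query."""
    raise NotImplementedError("Connect this to a ChatGPT session or API.")

def run_protocol(citation: str, published_abstract: str) -> dict:
    # Task (a): citation -> summary.
    summary = ask_chatgpt(
        f"Summarise the article cited as: {citation}")
    # Task (b): CGPT's own summary -> citation.
    citation_from_summary = ask_chatgpt(
        f"Provide the full citation of the article summarised here: {summary}")
    # Task (c): published abstract -> citation.
    citation_from_abstract = ask_chatgpt(
        f"Provide the full citation of the article with this abstract: "
        f"{published_abstract}")
    return {
        "summary": summary,                              # compared with the published abstract (5-10 proximity)
        "citation_from_summary": citation_from_summary,  # found 100% inaccurate in the study
        "citation_from_abstract": citation_from_abstract,  # found 100% biased in the study
    }
```

In the study, the protocol was repeated across the 34 publications, and each generated summary was scored against the corresponding published abstract on the 10-point proximity scale reported above.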