The impact of artificial intelligence technologies, neural networks, and chatbots on science and education has prompted widespread discussion in the academic community. It is no longer possible to prevent the use of neural networks such as ChatGPT for writing texts, including scientific ones. The present study adopts a qualitative approach. Its aim is to analyze how large language models, taking ChatGPT as an example, are applied in the scientific publishing activities of Russian scientists. In practice, the use of chatbots does not always satisfy either the individual user or the scientific community as a whole. On the one hand, users are confronted with the absence of the information they request. On the other hand, the scientific community, and especially the editors and readers of scientific journals, question the advisability of using neural networks because of the shortcomings of large language models, which have been widely debated in scientific publications. This study shows that there is a further reason to distrust neural networks: the incompleteness and opacity of the information produced by artificial intelligence stem from the texts on which the models are trained. For Russian science, this problem poses a serious threat, since the leading artificial intelligence companies train their models predominantly on English-language texts. The author argues that the social and humanitarian knowledge produced in modern Russia remains outside the corpus of texts used to train neural networks. This view is supported by the case of research by Russian scientists on Arctic governance: the relevant findings are absent from the English-language material underlying ChatGPT but are reflected in Russian-language publications.