The methods and tools used by the European Union to counter hybrid threats are identified, ranging from the fight against terrorism to measures aimed at countering economic competitors and political opponents (chiefly, squeezing Russia and China out of European markets). The paper concludes that it is no accident that neither EU institutions nor the research community have produced a comprehensive definition of operations to combat hybrid threats. A broad understanding of hybrid threats, as practically any action by an opponent (depending on the political situation), serves to justify the application of any counteraction tool. In the fight against global threats such as terrorism, cybercrime, and the spread of false medical data, the EU takes a systemic approach, which makes it possible to assess the level and degree of convergence of threats to critical infrastructure and the infosphere, as well as the possibilities for counteraction. At the same time, attempts to use economic, legislative, political, and informational tools to achieve one-sided economic, political, and military advantages do not reduce the tension in the EU’s relations with Russia, China, and some other countries; they only increase the number and strength of hybrid threats. This reduces the EU’s ability to achieve strategic autonomy.
Artificial intelligence (AI) is actively being incorporated into the communication process as it rapidly spreads and becomes cheaper for companies and other actors to use. AI has traditionally been used to run social media: it powers the platforms’ algorithms, bots, and deepfake technology, and supports content moderation and audience targeting. However, a variety of actors are now increasingly using AI technology, at times with malicious intent. For example, terrorist organizations use bots on social networks to spread their propaganda and recruit new fighters. Crimes involving AI are growing at a rapid pace, and their impact is extremely negative: mass protests demanding restrictions on the use of the technology, the recruitment of manipulated persons into criminal groups, the destruction of the reputations of victims of “smart” slander (sometimes leading to threats to their life and health), and so on. Combating these phenomena is a task that falls not only to security agencies but also to civil society institutions, the academic community, legislators, politicians, and the business community, since the complex nature of the threat requires complex solutions involving all interested parties. This paper aims to answer the following research questions: 1) what are the current threats to the psychological security of society caused by the malicious use of AI on social networks? 2) how do malicious (primarily non-state) actors carry out psychological operations through AI on social networks? 3) what impacts (behavioral, political, etc.) do such operations have on society? 4) how can the psychological security of society be protected using existing as well as innovative approaches? The answer to this last question is inextricably linked to the possibilities offered by international cooperation.
This paper examines the experiences of Germany and China, two leaders in the field of AI with different socio-political systems and approaches to a number of international issues. The paper concludes that increased international cooperation would make it possible to counter psychological operations conducted through AI more effectively and thereby protect society’s interests.