The study reflects on the values of modern artificial intelligence systems in the context of their use in legal activities, especially law enforcement. The research focuses on three properties of applied artificial intelligence: fairness, accountability, and transparency. Fairness requires that the operation of an artificial intelligence system be free of distortions caused by the configuration of its model weights or by the specifics of the dataset collected to train it. Accountability is treated as the ability of an AI system to protect the user data included in its training dataset or processed during operation. Transparency, in turn, reflects the ability to verify the decision logic of an AI system and to reverse-engineer its algorithm. This property is currently the least attainable, yet it bears directly on evaluating the effectiveness of AI and hence on the prospects for integrating such systems into legal practice. The paper draws on the current understanding of the capabilities of systems based on machine learning methods, namely convolutional neural networks and transformer networks. The study examines differences in how AI is treated in legislation and in the state of legal regulation, as well as in public and academic debate on the issue, across the European Union, the USA, Canada, Singapore, China, Russia, and Kazakhstan. As a result, the study proposes a set of recommendations for banning or restricting the use of artificial intelligence and decision support systems, taking national and international legislation into account.