In this scientific article, the author examines existing scientific and theoretical approaches to determining the legal responsibility of artificial intelligence, in particular of robots, which are currently the most prominent representatives of the field. As the author notes, the rules and mechanisms for establishing the legal responsibility of robots will change and improve over time; nevertheless, clear regulation of a set of social principles, the rules of the “life” of robots, is required. The author provides a list of such principles: the requirement to identify robots, the requirement of a warning, the “black box” problem, the requirement that artificial intelligence be independent of the engineering infrastructure and the human factor, the requirement that robotization be useful, the requirement of punishment (removal), the requirement of correctability, and the problem of “switching off” robots. Among the main problems of the legal regulation of artificial intelligence, the author points to the autonomy and independence with which artificial intelligence objects are created, objects now produced on an industrial scale in many countries. At the same time, the lack of unified principles for their creation and operation seriously complicates both the unification of requirements for them and the determination of measures of legal responsibility. In conclusion, it is noted that as the use of robots continues to grow, a balance must be found between the usefulness of robotization and the safety of society.