Artificial intelligence systems are increasingly used in industrial applications, security and military contexts, disaster response, policing and justice practices, finance, and healthcare. Disruptions to these systems can have negative consequences for human health and lives, human rights, and asset values, so protecting them from various types of destructive influences is a relevant area of research. The vast majority of previously published works aim either at reducing vulnerability to particular types of disturbances or at implementing particular resilience properties; however, these works either do not consider the concept of resilience as such, or their understanding of it varies greatly. The aim of this study is to present a systematic approach to analyzing the resilience of artificial intelligence systems, together with an analysis of relevant scientific publications. Our methodology involves forming a set of resilience factors, organizing and defining taxonomic and ontological relationships among the resilience factors of artificial intelligence systems, and analyzing relevant resilience solutions and challenges. The study examines the sources of threats and the methods for ensuring each resilience property of artificial intelligence systems. As a result, the potential to create a resilient artificial intelligence system by configuring its architecture and learning scenarios is confirmed. The results can serve as a roadmap for establishing technical requirements for future artificial intelligence systems, as well as a framework for assessing the resilience of systems that have already been developed.