Video anomaly detection is the task of identifying activities in a video feed that deviate from the usual observed pattern. It is a well-studied problem in computer vision and deep learning, where automated learning-based systems can detect certain kinds of anomalies with accuracies above 90%. Deep-learning-based artificial neural network models, however, offer very low interpretability. To address this issue, this work proposes to formulate the problem in terms of graphical models. Because easily interpretable graphs can be composed flexibly, a wide variety of techniques exist for building a model that captures both the spatial and the temporal relationships in a given video sequence. Experiments on common anomaly detection benchmark datasets show that significant performance gains can be achieved through simple re-modelling of individual graph components. In contrast to other video anomaly detection approaches, the one presented in this work focuses primarily on exploring how to shift the way we currently look at and process videos when trying to detect anomalous events.
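To make the graph formulation concrete, the following is a minimal illustrative sketch (not the implementation used in this work): hypothetical object detections, given as (frame, object id, position) tuples, become graph nodes; objects co-occurring in the same frame are linked by spatial edges, and the same object in consecutive frames is linked by temporal edges. All names and the detection format are assumptions made for illustration only.

```python
def build_spatiotemporal_graph(detections):
    """Build a toy spatio-temporal graph from object detections.

    detections: list of (frame, obj_id, x, y) tuples — a hypothetical
    detection format assumed for this sketch.
    Returns (nodes, spatial_edges, temporal_edges).
    """
    # Node key: (frame, obj_id); node attribute: bounding-box centre.
    nodes = {(f, o): (x, y) for f, o, x, y in detections}
    spatial, temporal = [], []
    for a in nodes:
        for b in nodes:
            if a >= b:                      # visit each unordered pair once
                continue
            (f1, o1), (f2, o2) = a, b
            if f1 == f2:                    # same frame -> spatial relation
                spatial.append((a, b))
            elif f2 - f1 == 1 and o1 == o2: # same object, next frame -> temporal
                temporal.append((a, b))
    return nodes, spatial, temporal

# Two objects tracked over two frames.
dets = [(0, "a", 1.0, 1.0), (0, "b", 4.0, 2.0),
        (1, "a", 1.2, 1.1), (1, "b", 4.1, 2.0)]
nodes, spatial, temporal = build_spatiotemporal_graph(dets)
```

An anomaly detector could then score how well the edges of a new clip's graph match those seen in normal training data, which is the kind of component-level re-modelling the abstract alludes to.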