Recent studies suggest that robotic traffic on web resources now exceeds human user traffic in both volume and intensity. Web robots threaten data privacy and copyright, degrade performance and security, and distort usage statistics, so efficient methods for detecting and protecting against them are needed. Existing techniques detect web robots through syntactic and analytical processing of web server logs. This article proposes analyzing the graph of web robot visits, taking into account visit timing as well as the topical connectivity of the visited pages. We provide an algorithm for data selection and cleansing, a procedure for extracting semantic features of the pages of a web resource, and the proposed detection parameters. We describe in detail the process of forming the ground truth and the principles of labelling sessions as either legitimate or robotic, and we propose using web server capabilities to identify sessions uniquely. The clustering procedure and the selection of a suitable classification model are discussed; for each of the studied models, hyperparameters are selected and the results are cross-validated. An analysis of performance and detection accuracy is provided, along with a comparison against existing approaches. Empirical results on real web resources show that the proposed method achieves better web robot detection accuracy and precision than existing approaches.
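For illustration only, the sketch below shows how hyperparameter selection with cross-validation for a session-level robot classifier might be set up; the feature names, synthetic data, and choice of scikit-learn estimator are assumptions for this sketch and are not the paper's implementation.

```python
# Hypothetical sketch (not the paper's code): hyperparameter search with
# cross-validation for a session-level web robot classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold

rng = np.random.default_rng(0)

# Placeholder session features; real features would be derived from the
# log-based visit graph (e.g. inter-request timing, topical connectivity
# of visited pages).
X = rng.random((500, 4))
# Placeholder labels: 1 = robotic session, 0 = legitimate session.
y = rng.integers(0, 2, size=500)

param_grid = {
    "n_estimators": [100, 300],
    "max_depth": [None, 10, 20],
}

search = GridSearchCV(
    estimator=RandomForestClassifier(random_state=0),
    param_grid=param_grid,
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
    scoring="f1",  # robotic sessions treated as the positive class
)
search.fit(X, y)
print("best hyperparameters:", search.best_params_)
print("cross-validated F1:", round(search.best_score_, 3))
```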