BACKGROUND: Real-time automated analysis of videos of the microvasculature is an essential step in the development of research protocols and clinical algorithms that incorporate point-of-care microvascular analysis. Validation studies on the software packages developed to perform these analyses have reported low agreement with the current referent standard, semi-automated analysis. In response to the call for validation studies of available automated analysis software by the European Society of Intensive Care Medicine, we report the first human validation study of AVA 4.0.

METHODS: Two retrospective perioperative datasets of human microcirculation videos (P1 and P2) and one prospective healthy volunteer dataset (V1) were used. Video quality was assessed using the Microcirculation Image Quality Selection (MIQS) score. Videos were first analysed with (1) AVA software 3.2 by two experienced users through a semi-manual method, followed by analysis with (2) AVA automated software 4.0, for perfused vessel density (PVD), total vessel density (TVD), and proportion of perfused vessels (PPV). Bland-Altman analysis and intraclass correlation coefficients (ICC) were used to measure agreement between the two methods. Each method's ability to discriminate between microcirculatory states before and after induction of general anesthesia was assessed using paired t-tests.

RESULTS: Fifty-two videos from P1, 128 videos from P2 and 26 videos from V1 met inclusion criteria for analysis. Bland-Altman and correlational analyses revealed poor agreement and no correlation between AVA 4.0 and AVA 3.2. Increasing video length did not improve agreement. Automated analysis consistently underestimated measures of vessel density. Following the induction of anesthesia, TVD and PVD measured using AVA 3.2 increased significantly for P1 and P2 (p < 0.05). However, these changes could not be replicated with the data generated by AVA 4.0.

CONCLUSIONS: AVA 4.0 is not a suitable tool for research or clinical purposes at this time. Future validation studies of automated microvascular flow analysis software should aim to measure the new software's agreement with the referent standard, its ability to discriminate between clinical states, and the quality thresholds at which its performance becomes unacceptable.
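
For readers unfamiliar with the agreement statistics named in the METHODS, the sketch below illustrates how Bland-Altman limits of agreement, a two-way random-effects ICC, and a paired t-test might be computed for two measurement methods. It is a minimal illustration only: the numeric arrays are hypothetical placeholder values, not data from this study, and the ICC form shown (ICC(2,1), absolute agreement) is one common choice rather than necessarily the exact model used by the authors.

```python
# Illustrative sketch only: Bland-Altman limits of agreement, ICC(2,1), and a
# paired t-test for two measurement methods. All values below are hypothetical.
import numpy as np
from scipy import stats

# Hypothetical TVD measurements (mm/mm^2) of the same videos by two methods
ava_32 = np.array([21.4, 19.8, 23.1, 20.5, 22.0, 18.9, 24.2, 21.7])  # semi-manual
ava_40 = np.array([15.2, 14.7, 17.9, 15.0, 16.3, 13.8, 18.5, 16.1])  # automated

# Bland-Altman: bias (mean difference) and 95% limits of agreement
diff = ava_40 - ava_32
bias = diff.mean()
half_width = 1.96 * diff.std(ddof=1)
print(f"Bland-Altman bias = {bias:.2f}, "
      f"limits of agreement = [{bias - half_width:.2f}, {bias + half_width:.2f}]")

# ICC(2,1): two-way random effects, absolute agreement, single measurement
ratings = np.column_stack([ava_32, ava_40])   # rows = videos, columns = methods
n, k = ratings.shape
grand = ratings.mean()
ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()
ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()
ss_total = ((ratings - grand) ** 2).sum()
ms_rows = ss_rows / (n - 1)
ms_cols = ss_cols / (k - 1)
ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
icc_2_1 = (ms_rows - ms_err) / (
    ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
print(f"ICC(2,1) = {icc_2_1:.3f}")

# Paired t-test, e.g. TVD before vs after induction of anesthesia (one method)
pre = np.array([19.1, 20.3, 18.7, 21.0, 19.8])
post = np.array([21.5, 22.0, 20.9, 23.1, 21.7])
t, p = stats.ttest_rel(post, pre)
print(f"paired t = {t:.2f}, p = {p:.3f}")
```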