Purpose
To determine the impact of the number of visual field tests performed per visit on the detection of mean deviation change over time in patients with early glaucoma or suspected glaucoma, and to identify a practical approach that maximizes change detection.
Methods
Intrasession (n = 322) and intersession (n = 323) visual field results for patients with glaucoma or suspected glaucoma were used to model mean deviation change in 10,000 progressing and 10,000 non-progressing computer-simulated patients over time. Variables assessed in the model included follow-up interval (0.5, 1, or 2 years), reliability rate (70%, 85%, or 100%), and the number of visual field tests performed at each visit (one to four).
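To illustrate the simulation design described above, the following minimal sketch models the three variables named in the Methods (follow-up interval, reliability rate, and tests per visit). The numerical values used here (baseline mean deviation, a −1 dB/year progression rate, 1.5 dB test-retest noise, and a 4-year horizon) are illustrative assumptions only and are not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed, not from the paper):
PROGRESSION_RATE = -1.0   # dB/year for a progressing eye
NOISE_SD = 1.5            # dB test-retest variability
BASELINE_MD = 0.0         # dB at the first visit
FOLLOW_UP_YEARS = 4

def simulate_patient(progressing, interval_years, tests_per_visit,
                     reliability=0.85):
    """Return per-visit mean deviation (MD) estimates for one simulated eye.

    Unreliable tests (probability 1 - reliability) are discarded and the
    visit estimate is the mean of the remaining reliable tests, reflecting
    the modelled variables: follow-up interval, reliability rate, and the
    number of tests performed per visit.
    """
    visit_times = np.arange(0, FOLLOW_UP_YEARS + 1e-9, interval_years)
    rate = PROGRESSION_RATE if progressing else 0.0
    visit_md = []
    for t in visit_times:
        true_md = BASELINE_MD + rate * t
        tests = true_md + rng.normal(0.0, NOISE_SD, size=tests_per_visit)
        reliable = tests[rng.random(tests_per_visit) < reliability]
        if reliable.size == 0:  # all tests unreliable: repeat a single test
            reliable = true_md + rng.normal(0.0, NOISE_SD, size=1)
        visit_md.append(reliable.mean())
    return visit_times, np.array(visit_md)

# Example: a progressing eye seen every 6 months with two tests per visit.
times, md = simulate_patient(progressing=True, interval_years=0.5,
                             tests_per_visit=2)
print(np.round(md, 2))
```

Repeating such a simulation over large cohorts of progressing and non-progressing eyes for each combination of interval, reliability rate, and tests per visit is the kind of procedure that yields detection statistics of the type reported in the Results.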
Results
Two visual field tests per session compared with one provided higher case detection rates at 2 years (99%–99.8% vs. 34.7%–76.3%, respectively), reduced time to detection (three or four visits vs. six to ten, respectively), and a less severe mean deviation score at the point at which change was identified (−4 dB vs. −10 dB, respectively), especially in the context of unreliable results. Performing two tests per visit offered advantages similar to those of performing three or four tests. False positive change detection rates (<2.5%) were similar across all conditions. Patients followed up every 6 months had less severe mean deviation loss at follow-up than patients followed up at 1-year or 2-year intervals.
Conclusions
Performing two tests per clinical visit at 6-month intervals is practical using SITA-Faster and provides higher detection rates of mean deviation change than a single test per visit at more widely spaced intervals.
Translational Relevance
This model provides guidance for selecting the number of tests per visit to detect mean deviation change.