Background
Monitoring and managing data returns in multi-centre randomised controlled trials is an important aspect of trial management. Maintaining consistently high data return rates has several benefits for trials, including enhancing oversight, improving the reliability of central monitoring techniques and helping to prepare for database lock and trial analyses. Despite this, there is little evidence to support best practice, and current standard methods may not be optimal.
Methods
We report novel methods from the Trial of Imaging and Schedule in Seminoma Testis (TRISST), a UK-based, multi-centre, phase III trial using paper Case Report Forms to collect data over a 6-year follow-up period for 669 patients. Using an automated database report that summarises the data return rate overall and per centre, we developed a Microsoft Excel-based tool to track per-centre trends in data return rate over time. The tool distinguished between forms that could and could not be completed retrospectively, which helped us understand issues at individual centres. We reviewed these statistics at regular trials unit team meetings. We notified centres whose data return rate appeared to be falling, even if they had not yet crossed the pre-defined acceptability threshold of an 80% return rate. We developed a standardised method for agreeing gradual improvement targets with centres that had persistent data return problems, and we formalised a detailed escalation policy for managing centres that failed to meet agreed targets. We conducted a post-hoc, descriptive analysis of the effectiveness of the new processes.
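To illustrate the kind of per-centre summary underlying this process, the sketch below shows, in Python, one way such flagging logic could be expressed. It is purely illustrative: the trial's tool was Excel-based, and the data structure (CentreSnapshot), the monthly reporting period, the function name flag_centres and the simple two-period trend check are assumptions for demonstration; only the 80% acceptability threshold is taken from the methods described above.

```python
from dataclasses import dataclass

# Pre-defined acceptability threshold for data return rate (from the trial's methods).
ACCEPTABILITY_THRESHOLD = 0.80


@dataclass
class CentreSnapshot:
    """One centre's data return figures for one reporting period (hypothetical structure)."""
    centre: str
    period: str            # e.g. "2015-04"; monthly granularity is an assumption
    forms_expected: int
    forms_returned: int

    @property
    def return_rate(self) -> float:
        # Treat a centre with no forms due as fully up to date.
        return self.forms_returned / self.forms_expected if self.forms_expected else 1.0


def flag_centres(snapshots: list[CentreSnapshot]) -> dict[str, str]:
    """Group snapshots by centre in chronological order and flag centres that are
    below the 80% threshold, or whose rate is falling even though still above it."""
    by_centre: dict[str, list[CentreSnapshot]] = {}
    for snap in snapshots:
        by_centre.setdefault(snap.centre, []).append(snap)

    flags: dict[str, str] = {}
    for centre, history in by_centre.items():
        history.sort(key=lambda s: s.period)
        latest = history[-1].return_rate
        if latest < ACCEPTABILITY_THRESHOLD:
            flags[centre] = "below threshold: agree improvement targets, escalate if unmet"
        elif len(history) >= 2 and latest < history[-2].return_rate:
            flags[centre] = "falling trend: notify centre before threshold is crossed"
    return flags
```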
Results
The new processes were used from April 2015 to September 2016. By May 2016, data return rates were higher than at any previous point in the trial and, for the first time, no centre had a return rate below 80%. In total, 10 of 35 centres were contacted about falling data return rates. Six of these 10 showed improved rates within 6–8 weeks, and the remainder within 4 months.
Conclusions
Our results constitute preliminary evidence of the effectiveness of novel methods for monitoring and managing data return rates in randomised controlled trials. We encourage other researchers to help generate better evidence-based methods in this area, whether through more robust evaluation of our methods or of alternative approaches.