The use of centralized raters who are remotely linked to sites and interview patients via videoconferencing or teleconferencing has been suggested as a way to improve interrater reliability and interview quality. This study compared the effect of site-based and centralized ratings on patient selection and placebo response in subjects with major depressive disorder. Subjects in a 2-center, placebo- and active-comparator-controlled depression trial were interviewed twice at each of 3 time points (baseline, 1 week postbaseline, and end point): once by the site rater and once remotely, via videoconference, by a centralized rater. Raters were blind to each other's scores. A site-based score of greater than 17 on the 17-item Hamilton Depression Rating Scale (HDRS-17) was required for study entry. Across all subjects entering the study, site-based raters' HDRS-17 scores were significantly higher than centralized raters' scores at baseline and postbaseline but not at end point. At baseline, 35% of subjects given an HDRS-17 total score of greater than 17 by a site rater were given a total score lower than 17 by a centralized rater and would have been ineligible to enter the study had the centralized rater's score determined study entry. The mean placebo change for site raters (7.52) was significantly greater than that for centralized raters (3.18; P < 0.001). Twenty-eight percent of subjects were placebo responders (>50% reduction in HDRS-17 score) based on site ratings, versus 14% based on central ratings (P < 0.001). When only those subjects whom site and centralized raters agreed were eligible for the study were examined, there was no significant difference in HDRS-17 scores. These findings suggest that the use of centralized raters could significantly change the study sample in a major depressive disorder trial and lead to significantly less change in mood ratings among those randomized to placebo.
Results from our small sample illustrate that the clinical interviewing skills of raters who administered the HAM-D fell below what many would consider an acceptable standard. Evaluation and training of clinical interviewing skills should therefore be incorporated into rater training programs.