Background
Randomized controlled trials (RCTs) with rigorous study designs are vital for determining the efficacy of treatments. Despite the high internal validity attributed to RCTs, external validity concerns limit the generalizability of results to the general population. Bias can be introduced, for example, when study participants who self-select into a trial are more motivated to comply with study conditions than other individuals. These external validity considerations extend to e-mental health (eMH) research, especially when eMH tools are designed for public access and provide minimal or no supervision.
Objective
Clustering techniques were used to identify engagement profiles of RCT participants and community users of a self-guided eMH program. This exploratory approach examined actual, rather than theorized, engagement patterns in both groups. Both samples had access to the eMH program over the same period and received identical usage recommendations on the program website. The aim of this study was to help gauge expectations about similarities and differences in the usage behaviors of an eMH tool across evaluation and naturalistic contexts.
Methods
Australian adults registered for myCompass, a self-guided online treatment program designed to reduce mild to moderate symptoms of negative emotions, either as part of an RCT (160/231, 69.3% female) or by freely accessing the program on the internet (5563/8391, 66.3% female) between October 2011 and October 2012. During registration, RCT participants and community users provided basic demographic information. Usage metrics (number of logins, trackings, and learning activities) were recorded by the system.
Results
The samples differed significantly in age at sign-up (P=.003), with community users on average 3 years older (mean 41.78, SD 13.64 years) than RCT participants (mean 38.79, SD 10.73 years). Furthermore, frequency of program use during the first 49 days after registration was higher for RCT participants than for community users on all usage metrics (all P values <.001). Two-step cluster analyses revealed 3 user groups in the RCT sample (Nonstarters, 10-Timers, and 30+-Timers) and 2 user groups in the community sample (2-Timers and 20-Timers). The groups appeared comparable in patterns of use but differed in magnitude, with RCT usage groups engaging more frequently than community usage groups. Only the high-usage group among RCT participants approached the recommended level of myCompass use.
Conclusions
Findings suggest that external validity concerns with RCT designs may arise with regard to the predicted magnitude of eMH program use rather than to overall usage styles. Following up RCT nonstarters may provide unique insights into why individuals choose not to engage with an eMH program despite being willing to participate in an eMH evaluation study. Overestimating the frequency of engagement with eMH tools may have theoretical implications and may affect economic considerations for disseminating these tools to the general public.