Digital mental health interventions (DMHIs) present a promising way to address gaps in mental health service provision. However, the relationship between user engagement and outcomes in the context of these interventions has not been established. This study synthesized the current state of evidence on the relationship between engagement with DMHIs and mental health outcomes. MEDLINE, PsycINFO, and Embase databases were searched from inception to August 1, 2021. Original or secondary analyses of randomized controlled trials (RCTs) were included if they examined the relationship between DMHI engagement and post-intervention outcome(s). Thirty-five studies were eligible for inclusion in the narrative review, and 25 studies had sufficient data for meta-analysis. Random-effects meta-analyses indicated that greater engagement was significantly associated with post-intervention mental health improvements, regardless of whether this relationship was examined using correlational [r = 0.24, 95% CI (0.17, 0.32), Z = 6.29, p < 0.001] or between-groups designs [Hedges' g = 0.40, 95% CI (0.097, 0.705), p = 0.010]. This association was also consistent regardless of intervention type (unguided/guided), diagnostic status, or mental health condition targeted. This is the first review providing empirical evidence that engagement with DMHIs is associated with therapeutic gains. Implications and future directions are discussed.

Systematic Review Registration: PROSPERO, identifier CRD42020184706.
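For readers unfamiliar with how study-level correlations are pooled under a random-effects model, the sketch below illustrates one common approach: Fisher's z transformation of per-study correlations with a DerSimonian-Laird estimate of between-study variance, back-transformed to the r metric. The correlations and sample sizes are invented for illustration and are not data from the studies in this review; the review's own analysis may have used a different estimator or software.

```python
import numpy as np

# Hypothetical per-study correlations and sample sizes (illustrative only;
# not the data from the review).
r = np.array([0.10, 0.35, 0.22, 0.28, 0.18])
n = np.array([120, 85, 200, 60, 150])

# Fisher's z transformation stabilizes the variance of correlations.
z = np.arctanh(r)        # z_i = 0.5 * ln((1 + r_i) / (1 - r_i))
v = 1.0 / (n - 3)        # within-study variance of z_i
w = 1.0 / v              # fixed-effect (inverse-variance) weights

# DerSimonian-Laird estimate of between-study variance tau^2.
z_fixed = np.sum(w * z) / np.sum(w)
Q = np.sum(w * (z - z_fixed) ** 2)
df = len(z) - 1
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / c)

# Random-effects weights incorporate tau^2; pool and back-transform to r.
w_re = 1.0 / (v + tau2)
z_re = np.sum(w_re * z) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
pooled_r = np.tanh(z_re)
ci_low, ci_high = np.tanh(z_re - 1.96 * se_re), np.tanh(z_re + 1.96 * se_re)
print(f"pooled r = {pooled_r:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f})")
```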
Background: There is growing research evidence that subclinical autistic traits are elevated in relatives of individuals with autism spectrum disorder (ASD), continuously distributed in the general population, and likely to share common etiology with ASD. A number of measures have been developed to assess autistic traits quantitatively in unselected samples. So far, the Quantitative Checklist for Autism in Toddlers (Q-CHAT) is one of very few such measures developed for use with toddlers as young as 18 months, but little is known about its measurement properties and factor structure.

Methods: The present study examined the internal consistency, factor structure, test-retest stability, and convergent validity of the Q-CHAT in a sample of toddlers in Singapore whose caregivers completed the Q-CHAT at 18 months (n = 368) and 24 months (n = 396).

Results: Three factors were derived, accounting for 38.1% of the variance: social/communication traits, non-social/behavioral traits, and a speech/language factor. Internal consistency was suboptimal for the total and speech/language scores but acceptable for the social/communication and non-social/behavioral factor scores. Scores were generally stable between 18 and 24 months. Convergent validity was found with the Pervasive Developmental Disorders subscale of the Child Behavior Checklist (CBCL), completed by caregivers when their children were 24 months old. Q-CHAT total scores in this sample were higher than those reported in other unselected samples from the UK.

Conclusions: The Q-CHAT was found to have a three-factor structure, acceptable internal consistency for its two main factor scores (social/communication and non-social/behavioral), normally distributed scores in an unselected sample, and a structure and measurement properties similar to those reported in other published studies. Findings are discussed in relation to the existing literature and future directions for the validation of the Q-CHAT.
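As a minimal illustration of the internal-consistency analysis described above, the sketch below computes Cronbach's alpha for a simulated respondent-by-item score matrix. The item count, scoring range, and simulated responses are hypothetical and are not taken from the Q-CHAT dataset; the study's own psychometric analysis may differ in detail.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Simulated responses for a hypothetical 10-item subscale scored 0-4
# (illustrative only; not Q-CHAT data).
rng = np.random.default_rng(0)
latent = rng.normal(size=(400, 1))                       # shared trait per child
noise = rng.normal(scale=1.0, size=(400, 10))            # item-specific noise
items = np.clip(np.rint(2 + latent + noise), 0, 4)
print(f"alpha = {cronbach_alpha(items):.2f}")
```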