Background: Self-guided, Web-based interventions for depression show promising results but suffer from high attrition and low user engagement. Online peer support networks can be highly engaging, but they show mixed results and lack evidence-based content.

Objective: Our aim was to introduce and evaluate a novel Web-based, peer-to-peer cognitive reappraisal platform designed to promote evidence-based techniques, with the hypotheses that (1) repeated use of the platform increases reappraisal and reduces depression and (2) the social, crowdsourced interactions enhance engagement.

Methods: Participants aged 18-35 were recruited online and randomly assigned to the treatment group, “Panoply” (n=84), or an active control group, online expressive writing (n=82). Both are fully automated Web-based platforms. Participants were asked to use their assigned platform for a minimum of 25 minutes per week for 3 weeks. Both platforms involved posting descriptions of stressful thoughts and situations. Participants on the Panoply platform additionally received crowdsourced reappraisal support immediately after submitting a post (median response time=9 minutes). Panoply participants could also practice reappraising stressful situations submitted by other users. Online questionnaires administered at baseline and 3 weeks assessed depression symptoms, reappraisal, and perseverative thinking. Engagement was assessed through self-report measures, session data, and activity levels.

Results: The Panoply platform produced significant improvements from pre to post for depression (P=.001), reappraisal (P<.001), and perseverative thinking (P<.001). The expressive writing platform yielded significant pre to post improvements for depression (P=.02) and perseverative thinking (P<.001), but not reappraisal (P=.45). The two groups did not diverge significantly at post-test on measures of depression or perseverative thinking, though Panoply users had significantly higher reappraisal scores (P=.02) than expressive writing users. We also found significant group by treatment interactions. Individuals with elevated depression symptoms showed greater comparative benefit from Panoply for depression (P=.02) and perseverative thinking (P=.008). Individuals with baseline reappraisal deficits showed greater comparative benefit from Panoply for depression (P=.002) and perseverative thinking (P=.002). Changes in reappraisal mediated the effects of Panoply, but not the expressive writing platform, for both depression (ab=-1.04, SE 0.58, 95% CI -2.67 to -0.12) and perseverative thinking (ab=-1.02, SE 0.61, 95% CI -2.88 to -0.20). Dropout rates were similar for the two platforms; however, Panoply yielded significantly more usage activity (P<.001) and significantly greater user experience scores (P<.001).

Conclusions: Panoply engaged its users and was especially helpful for depressed individuals and for those who might ordinarily underutilize reappraisal techniques. Further investigation is needed to examine the long-term effects of such a platform and whether the...
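The mediation results reported above use the standard product-of-coefficients (ab) indirect effect with a percentile bootstrap confidence interval. As a rough illustration of that computation (not the authors' actual analysis code; the arrays `group`, `mediator`, and `outcome` are hypothetical), a minimal Python sketch might look like this:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def indirect_effect(group, mediator, outcome):
    """Product-of-coefficients ab: a (group -> mediator) times
    b (mediator -> outcome, controlling for group)."""
    a = LinearRegression().fit(group.reshape(-1, 1), mediator).coef_[0]
    b = LinearRegression().fit(np.column_stack([mediator, group]), outcome).coef_[0]
    return a * b

def bootstrap_ci(group, mediator, outcome, n_boot=5000, alpha=0.05):
    """Percentile bootstrap confidence interval for the indirect effect."""
    n = len(group)
    boots = [
        indirect_effect(group[idx], mediator[idx], outcome[idx])
        for idx in (rng.integers(0, n, n) for _ in range(n_boot))
    ]
    lo, hi = np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return indirect_effect(group, mediator, outcome), (lo, hi)
```

A bootstrap interval that excludes zero, as reported above (e.g., -2.67 to -0.12), indicates a reliable indirect effect.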
Background: Conversational agents cannot yet express empathy in nuanced ways that account for the unique circumstances of the user. Agents that possess this faculty could be used to enhance digital mental health interventions.

Objective: We sought to design a conversational agent that could express empathic support in ways that might approach, or even match, human capabilities. Another aim was to assess how users might appraise such a system.

Methods: Our system used a corpus-based approach to simulate expressed empathy. Responses from an existing pool of online peer support data were repurposed by the agent and presented to the user. Information retrieval techniques and word embeddings were used to select historical responses that best matched a user’s concerns. We collected ratings from 37,169 users to evaluate the system. Additionally, we conducted a controlled experiment (N=1284) to test whether the alleged source of a response (human or machine) might change user perceptions.

Results: The majority of responses created by the agent (2986/3770, 79.20%) were deemed acceptable by users. However, users significantly preferred the efforts of their peers (P<.001). This effect was maintained in a controlled study (P=.02), even when the only difference in responses was whether they were framed as coming from a human or a machine.

Conclusions: Our system illustrates a novel way for machines to construct nuanced and personalized empathic utterances. However, the design had significant limitations, and further research is needed to make this approach viable. Our controlled study suggests that even in ideal conditions, nonhuman agents may struggle to express empathy as well as humans. The ethical implications of empathic agents, as well as their potential iatrogenic effects, are also discussed.
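The retrieval step described here (matching a new user's concern to historical peer responses via word embeddings) is not specified in detail in the abstract. One common approach, sketched below under the assumptions of a dictionary of pretrained word vectors and a corpus of (past_concern, past_response) pairs (all names hypothetical), is to average word embeddings and rank candidates by cosine similarity:

```python
import numpy as np

def embed(text, vectors, dim=300):
    """Average pretrained word vectors into a single text embedding."""
    words = [w for w in text.lower().split() if w in vectors]
    if not words:
        return np.zeros(dim)
    return np.mean([vectors[w] for w in words], axis=0)

def best_response(user_post, corpus, vectors):
    """Return the stored peer response whose original concern is most
    similar (by cosine) to the incoming post. corpus is a list of
    (past_concern, past_response) pairs."""
    q = embed(user_post, vectors)

    def cosine(v):
        denom = (np.linalg.norm(q) * np.linalg.norm(v)) or 1.0
        return float(q @ v) / denom

    return max(corpus, key=lambda pair: cosine(embed(pair[0], vectors)))[1]
```

In a production system, the ranked candidates would typically be filtered for safety and relevance before being shown to the user, consistent with the acceptability ratings the study collected.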
Although much research considers how individuals manage their own emotions, less is known about the emotional benefits of regulating the emotions of others. We examined this topic in a 3-week study of an online platform providing training and practice in the social regulation of emotion. We found that participants who engaged more by helping others (vs. sharing and receiving support for their own problems) showed greater decreases in depression, mediated by increased use of reappraisal in daily life. Moreover, social regulation messages with more other-focused language (i.e., second-person pronouns) were (a) more likely to elicit expressions of gratitude from recipients and (b) predictive of increased use of reappraisal over time for message composers, suggesting perspective-taking enhances the benefits of practicing social regulation. These findings unpack potential mechanisms of socially oriented training in emotion regulation and suggest that by helping others regulate, we may enhance our own regulatory skills and emotional well-being.
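The "other-focused language" measure described above is operationalized via second-person pronouns. A minimal sketch of such a feature (our simplification of the paper's linguistic analysis; the pronoun list and tokenization are assumptions) could be:

```python
SECOND_PERSON = {"you", "your", "yours", "yourself", "yourselves"}

def other_focus_score(message: str) -> float:
    """Proportion of tokens that are second-person pronouns,
    a simple proxy for other-focused language."""
    tokens = [t.strip(".,!?;:\"'").lower() for t in message.split()]
    tokens = [t for t in tokens if t]
    if not tokens:
        return 0.0
    return sum(t in SECOND_PERSON for t in tokens) / len(tokens)
```

Scores like this could then be related to recipients' expressions of gratitude and to composers' later reappraisal use, as in the analyses described above.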
Objective: Mental illness is a leading cause of disease burden; however, many barriers prevent people from seeking mental health services. Technological innovations may improve our ability to reach underserved populations by overcoming many existing barriers. We evaluated a brief, automated risk assessment and intervention platform designed to increase the use of crisis resources provided to those online and in crisis.

Method: Participants, users of the digital mental health app Koko, were randomly assigned to treatment or control conditions upon accessing the app and were included in the study after their posts were identified by machine learning classifiers as signaling a current mental health crisis. Participants in the treatment condition received a brief Barrier Reduction Intervention (BRI) designed to increase the use of crisis service referrals provided on the app. Participants were followed up several hours later to assess the use of crisis services.

Results: Only about one fifth of participants in a crisis (21.8%) reported being "very likely" to use clinical referrals provided to them, with the most commonly endorsed barriers being that they "just want to chat" or their "thoughts are too intense." Among participants providing follow-up data (41.3%), receipt of the BRI was associated with a 23% increase in the use of crisis services.

Conclusion: These findings suggest that a brief, automated BRI can be efficacious on digital platforms, even among individuals experiencing acute psychological distress. The potential to increase help seeking and service utilization with such procedures holds promise for those in need of psychiatric services.

Trial Registration: clinicaltrials.gov identifier: NCT03633825.

What is the public health significance of this article? This study provides evidence that a brief, automated barrier reduction procedure increased the rate of service utilization among individuals in acute distress on digital platforms. These findings suggest that automated procedures may have the potential to increase help-seeking behavior among those in need of mental health services on a large scale.
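The abstract does not say which classifiers Koko used to flag crisis posts. As an illustrative stand-in only, a simple supervised text classifier and routing rule might look like the following (the toy training data, threshold, and flow names are all hypothetical):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: posts flagged (1) or not (0) as crisis signals.
posts = ["i can't go on anymore", "rough day at work but i'm ok"]
labels = [1, 0]

crisis_clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(max_iter=1000),
)
crisis_clf.fit(posts, labels)

def route_post(text, threshold=0.5):
    """Route a new post to the crisis-intervention flow (e.g., the BRI
    and crisis referrals) if the estimated probability exceeds the threshold."""
    p = crisis_clf.predict_proba([text])[0][1]
    return "crisis_flow" if p >= threshold else "standard_flow"
```

In practice, the threshold would be tuned to balance missed crises against false alarms, since both carry costs in a clinical context.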
Crowdsourcing complex creative tasks remains difficult, in part because these tasks require skilled workers. Most crowdsourcing platforms do not help workers acquire the skills necessary to accomplish complex creative tasks. In this paper, we describe a platform that combines learning and crowdsourcing to benefit both the workers and the requesters. Workers gain new skills through interactive step-by-step tutorials and test their knowledge by improving real-world images submitted by requesters. In a series of three deployments spanning two years, we varied the design of our platform to enhance the learning experience and improve the quality of the crowd work. We tested our approach in the context of LevelUp for Photoshop, which teaches people how to do basic photograph improvement tasks using Adobe Photoshop. We found that by using our system, workers gained new skills and produced high-quality edits for requested images, even if they had little prior experience editing images.