Objective: Digital monitoring technologies (e.g., smartphones and wearable devices) provide unprecedented opportunities to study potentially harmful behaviors such as suicide, violence, and alcohol/substance use in real time. The use of these new technologies has the potential to significantly advance the understanding, prediction, and prevention of these behaviors. However, such technologies also introduce myriad ethical and safety concerns, such as deciding when and how to intervene if a participant's responses indicate elevated risk during the study. Methods: We used a modified Delphi process to develop consensus among a diverse panel of experts on ethical and safety practices for conducting digital monitoring studies with those at risk for suicide and related behaviors. Twenty-four experts, including scientists, clinicians, ethicists, legal experts, and those with lived experience, provided input through an iterative, multi-stage survey and discussion process. Results: Consensus was reached on multiple aspects of such studies, including inclusion criteria, informed consent elements, technical and safety procedures, data review practices during the study, responding to various levels of participant risk in real time, and data and safety monitoring. Conclusions: This consensus statement provides guidance for researchers, funding agencies, and institutional review boards regarding expert views on current best practices for conducting digital monitoring studies with those at risk for suicide, with relevance to the study of a range of other potentially harmful behaviors (e.g., alcohol/substance use and violence). The statement also highlights areas in which more data are needed before consensus can be reached regarding best ethical and safety practices for digital monitoring studies.
Suicide researchers commonly use a variety of assessment methods (e.g., surveys and interviews) to enroll participants into studies and assign them to study conditions. However, prior studies suggest that different assessment methods and items may yield different responses from participants. This study examines potential inconsistencies in participants' reports of suicidal ideation (SI) and suicide attempt (SA) across commonly used assessment methods: phone screen interview, in-person interview, self-report survey, and confidential exit survey. To test the reliability of the effects, we replicated the study across two samples. In both samples, we observed a notable degree of inconsistent reporting. Importantly, the highest endorsement rates for SI/SA were on the confidential exit survey. Follow-up assessments and analyses did not provide strong support for the roles of purposeful inaccuracy, random responding, or differences in participants' experiences and conceptualizations of SI. Although the reasons for such inconsistencies remain inconclusive, the results suggest that classifying suicidal and control participants using multiple items to capture a single construct, using a graded scale to capture a broad spectrum of thoughts and behaviors, and taking into account consistency of responding across such items may yield clearer and more homogeneous groups, and this approach is recommended for future studies. Public Significance Statement: People are inconsistent in their reports of SI and SA across different methods of assessment, with the highest rates of endorsement on more anonymous measures, despite apparent efforts to provide accurate information. These findings have important implications for both researchers and clinicians whose work relies on accurately identifying individuals affected by suicidal thoughts and behaviors.
In response to the coronavirus disease 2019 (COVID-19) pandemic, federal, state, and local governments in the United States implemented restrictions on in-person gatherings and provided recommendations for minimum distance between individuals to minimize the spread of severe acute respiratory syndrome coronavirus 2. These restrictions necessitated an unprecedented scaling up of telehealth services across the health care system, including in mental health and substance use disorder care. The learning curve for clinicians, many of whom had no prior experience with telehealth, has been steep. The rapid shift to remote services required adjusting to technical and clinical challenges as services were being provided. The lessons learned during this time have the potential to continue to inform telehealth services, even after the acute need for social distancing has abated. In this article, we aim to share some of the lessons learned during this period from providing group-based cognitive-behavioral therapy. We discuss both technical and clinical challenges in conducting remote cognitive-behavioral groups via videoconferencing software, as well as successes and failures in adjusting to those challenges. Clinical Impact Statement: This article provides tangible technical and clinical recommendations for providing group-based cognitive-behavioral therapy (CBT) using videoconferencing during the coronavirus disease 2019 (COVID-19) pandemic. Many experiential and didactic exercises can translate well, with modifications, to an online CBT group.