2019
DOI: 10.2196/12235
Do Search Engine Helpline Notices Aid in Preventing Suicide? Analysis of Archival Data

Abstract: Background Search engines display helpline notices when people query for suicide-related information. Objective In this study, we aimed to examine if these notices and other information displayed in response to suicide-related queries are correlated with subsequent searches for suicide prevention rather than harmful information. Methods Anonymous suicide-related searches made on Bing and Google in the United States, the United Kingdom, Hong K…

Cited by 18 publications (8 citation statements) | References 20 publications
“…The 2019 Summit explored several new challenges and opportunities that are shaping both the contours and dynamics of the digital health communication space. These included showcasing emergent research on the promising use of Internet search engines and data mining to prevent or diagnose diseases (e.g., Cheng & Yom-Tov, 2019; Hochberg, Daoud, Shehadeh, & Yom-Tov, 2019; Ofran, Paltiel, Pelleg, Rowe, & Yom-Tov, 2012); understanding the role of stigma, misinformation, misconception, and autonomous Internet actors (“bots”) that have the capacity to distort, controversialize, and undermine public perceptions of public health issues, such as vaccine safety and vaccine hesitancy (e.g., Broniatowski et al., 2018; Dredze, Broniatowski, Smith, & Hilyard, 2016; Fraser, 2019; Jamison, Broniatowski, & Quinn, 2019); understanding the increased use and impact of visuals on health perceptions and the variations in tone, stance, and accuracy of information available at moderated versus unmoderated public websites (e.g., Shoup et al., 2019); and illuminating the increasing challenges to protecting public health presented by what are commonly referred to as the “Dark Net” and “Deep Web” (e.g., Carson, 2018; Mackey, 2018; McCormick, 2014).…”
Section: S (mentioning)
confidence: 99%
“…The majority of the included studies were descriptive in nature: exploring the nature of the content found online (51 studies), describing user knowledge (2 studies), or exploring user experiences (28 studies). There were only six experimental studies (An and Lee, 2019; Cheng and Yom-Tov, 2019; Corbitt-Hall et al., 2016; Corbitt-Hall et al., 2019; Lewis et al., 2018; Till et al., 2017). Only four of the included studies had a longitudinal element (Arendt et al., 2019; Cheng and Yom-Tov, 2019; Scherr and Reinemann, 2016; Till et al., 2017).…”
Section: Results (mentioning)
confidence: 99%
“…There were only six experimental studies (An and Lee, 2019; Cheng and Yom-Tov, 2019; Corbitt-Hall et al., 2016; Corbitt-Hall et al., 2019; Lewis et al., 2018; Till et al., 2017). Only four of the included studies had a longitudinal element (Arendt et al., 2019; Cheng and Yom-Tov, 2019; Scherr and Reinemann, 2016; Till et al., 2017). Of the 36 studies focussed on users rather than content, 16 were in populations with direct experience of self-harm or thoughts of self-harm.…”
Section: Results (mentioning)
confidence: 99%
“…And while most content moderation is not a crisis, on occasion it can be: one young man recently committed suicide after an account suspension prevented him from conducting business and repeated attempts to appeal the decision failed [94]. Platforms already provide mental health support in specific contexts like searches for self-harm [73], eating disorders [45], and suicide [19]. Providing emotional support could be one small way to begin to address users’ calls for additional compassion in the design of content moderation.…”
Section: Compassion (mentioning)
confidence: 99%