An increasing number of safety departments in organizations across the U.S. are offering mobile apps that allow their local community members to report potential risks, such as hazards, suspicious events, ongoing incidents, and crimes. These "community-sourced risk" systems are designed to let safety departments take action to prevent or reduce the severity of situations that may harm the community. However, little is known about the actual use of such community-sourced risk systems from the perspective of both community members and safety departments. This study is the first large-scale empirical analysis of community-sourced risk systems. More specifically, we conducted a comprehensive system log analysis of LiveSafe, a community-sourced risk system that has been used by more than two hundred universities and colleges. Our findings revealed a mismatch between what the safety departments expected to receive and what their community members actually reported, and identified several factors (e.g., anonymity, organization, and tip type) that were associated with the safety departments' responses to their members' tips. Our findings provide design implications for chatbot-enabled community-risk systems and make practical contributions for safety organizations and practitioners seeking to improve community engagement.
Recently, an increasing number of safety organizations in the U.S. have incorporated text-based risk reporting systems to respond to safety incident reports from their community members. To better understand the interaction between community members and dispatchers in text-based risk reporting systems, this study conducts a system log analysis of LiveSafe, a community safety reporting system, providing empirical evidence of the conversational patterns between users and dispatchers using both quantitative and qualitative methods. We created an ontology to capture the information (e.g., location, attacker, target, weapon, start time, and end time) that dispatchers often collected from users regarding their incident tips. Applying the proposed ontology, we found that dispatchers often asked users for different information across event types (e.g., Attacker for Abuse and Attack events, Target for Harassment events). Additionally, using emotion detection and regression analysis, we found inconsistency in dispatchers' emotional support and responsiveness to users' messages across organizations and across incident categories. The results also showed that users had a higher response rate and responded more quickly when dispatchers provided emotional support. These novel findings offer significant insights to both practitioners and system designers, e.g., AI-based solutions to augment human agents' skills for improved service quality.