Recently, an increasing number of safety organizations in the U.S. have adopted text-based risk reporting systems to respond to safety incident reports from their community members. To better understand the interaction between community members and dispatchers in such systems, this study conducted a system log analysis of LiveSafe, a community safety reporting system, providing empirical evidence of the conversational patterns between users and dispatchers using both quantitative and qualitative methods. We created an ontology to capture the information (e.g., location, attacker, target, weapon, start-time, and end-time) that dispatchers often collected from users regarding their incident tips. Applying the proposed ontology, we found that dispatchers often asked users for different information across event types (e.g., Attacker for Abuse and Attack events, Target for Harassment events). Additionally, using emotion detection and regression analysis, we found inconsistencies in dispatchers' emotional support and responsiveness to users' messages across organizations and across incident categories. The results also showed that users responded at a higher rate and more quickly when dispatchers provided emotional support. These novel findings provide significant insights for both practitioners and system designers, e.g., informing AI-based solutions that augment human agents' skills to improve service quality.