Discussion forums are valuable tools for supporting student learning in online contexts. However, interactions in online discussion forums are often sparse, contributing to issues such as low engagement and dropout. Recent educational studies have examined the affordances of conversational agents (CAs) powered by artificial intelligence (AI) for automatically supporting student participation in discussion forums, but few studies have paid attention to the safety of CAs. This study aimed to address the safety challenges of CAs constructed with educational big data to support learning. Specifically, we proposed a safety-aware CA model, benchmarked against two state-of-the-art (SOTA) models, to support high school students' learning on an online algebra learning platform. We applied automatic text analysis to evaluate the safety and socio-emotional support levels of CA-generated and human-generated texts. The CA models were trained and evaluated on a large dataset consisting of all discussion post-reply pairs (n = 2,097,139) produced by 71,918 online math learners from 2015 to 2021. Results show that while the SOTA models can generate supportive texts, the safety of those texts is compromised. In contrast, our proposed model effectively enhances the safety of generated texts while providing comparable support.
What is already known about this topic
Online discussion forums have been plagued by a lack of interaction among students, driven by factors such as the expectation of receiving no response and perceptions of topic irrelevance, which lower students' motivation to participate.
AI‐based conversational agents can automatically support students' interactions in online discussion forums at a large scale, and their generated responses can be human‐like, contextually coherent and socio‐emotionally supportive.
Unsafe discourse exchanges between students and conversational agents can be harmful: identity attacks, aggression and bullying behaviours embedded in discourse can disrupt students' knowledge inquiry and negatively affect their motivation and engagement. However, few educational studies have paid attention to the safety of conversational agents.
What this paper adds
This study proposes and synthesizes strategies for building AI-based conversational agents that automatically support online discussions with safe and supportive discourses.
This study reveals the relationship between discourse safety and social support, showing that supportive discourses can nonetheless be unsafe.
This study enriches the literature on educational conversational agents by synthesizing a conceptual framework on discourse safety and social support, and by proposing concrete algorithmic strategies to improve the safety of conversational agents.
Implications for practice and/or policy
Researchers and practitioners can adopt the strategies described in this study, such as generation control, open-sourced models and public API services, to evaluate students' discourse safety for early intervention or to modify existing conversational agents to be safety-aware (a minimal sketch follows this list).
Practitioners can utilize...
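To make the open-sourced-model strategy above concrete, the following is a minimal Python sketch of screening discourse for safety before a reply is posted. The paper does not prescribe a specific tool; the Detoxify library, the `is_unsafe` helper and the 0.5 threshold here are illustrative assumptions, not the models or settings evaluated in the study.

```python
# A minimal sketch of scoring discourse safety with an open-sourced model.
# Detoxify and the 0.5 threshold are illustrative choices, not the tools
# used in this study.
from detoxify import Detoxify

# Load a pretrained toxicity classifier once; reuse it across posts.
model = Detoxify("original")

def is_unsafe(text: str, threshold: float = 0.5) -> bool:
    """Flag a post whose predicted toxicity exceeds the threshold."""
    scores = model.predict(text)  # dict: toxicity, insult, identity_attack, ...
    return scores["toxicity"] >= threshold

# Example: screen a candidate CA reply before posting it to the forum.
candidate_reply = "You clearly didn't read the problem. Try again."
if is_unsafe(candidate_reply):
    print("Reply withheld for review.")  # early-intervention hook
else:
    print("Reply posted.")
```

The same hook could instead call a public API service (e.g., Perspective API) to score students' own posts, or route flagged texts to a human moderator for early intervention.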