BACKGROUND
Empathy is a driving force in our connection to others, our mental wellbeing, and our resilience to challenges. With the rise of generative AI systems, mental health chatbots, and AI social support companions, it is important to understand how empathy unfolds towards stories from human vs AI narrators, and how user emotions might change when the author of a story is made transparent to users.
OBJECTIVE
We aim to understand how empathy shifts across human-written vs AI-written stories, and how these findings inform the ethical implications and human-centered design of mental health chatbots intended as objects of empathy.
METHODS
We conduct crowd-sourced studies with N=985 participants who each write a personal story and then rate their empathy towards 2 retrieved stories, one written by a language model and the other written by a human. Our studies vary transparency about whether a story is written by a human or an AI to examine how this disclosure affects empathy towards the narrator. We conduct mixed-methods analyses, combining quantitative and qualitative approaches, to understand how and why transparency affects empathy towards human vs AI storytellers.
RESULTS
We find that participants consistently and significantly empathize more with human-written than machine-written stories in almost all conditions, regardless of whether they are aware that an AI wrote the story (P<.001). We also find that participants report a greater willingness to empathize with AI-written stories when the story author is disclosed (P<.001).
CONCLUSIONS
Our work sheds light on how empathy towards AI or human narrators depends on how the story is presented, informing ethical considerations for artificial social support or mental health chatbots intended to evoke empathetic reactions.