This study explored the effect of chatbot emotional disclosure on user satisfaction and reuse intention for a chatbot counseling service. It also examined the independent and serial mediating roles of user emotional disclosure intention and perceived intimacy with a chatbot in the relationship between chatbot emotional disclosure, user satisfaction, and reuse intention for chatbot counseling. In total, 348 American adults were recruited to participate in a mental health counseling session with one of two types of artificial intelligence-powered mental health counseling chatbots: a chatbot disclosing factual information only, or a chatbot disclosing humanlike emotions. The results revealed that chatbot emotional disclosure significantly increased user satisfaction and reuse intention for the chatbot counseling service. The results further revealed that user emotional disclosure intention and perceived intimacy with the chatbot independently and serially mediated the effect of chatbot emotional disclosure on user satisfaction and chatbot counseling service reuse intention. These results indicate positive effects of artificial emotions and their disclosure in the context of chatbot-moderated mental health counseling. Practical implications and psychological mechanisms are discussed.
This study examined a serial mediation mechanism to test the effect of chatbots' human representation on the intention to comply with health recommendations, through psychological distance and trust toward the chatbot counselor. The sample comprised 385 adults from the USA. Two artificial intelligence chatbots were developed, one with a humanlike representation and one with a machine-like representation. Participants had a short conversation with one of the chatbots to simulate an online mental health counseling session and reported their experience in an online survey. The results showed that participants in the human representation condition reported a higher intention to comply with chatbot-generated mental health recommendations than those in the machine-like representation condition. Furthermore, the results supported that psychological distance and perceived trust toward the chatbot each independently mediated the relationship between human representation and compliance intention. The serial mediation through psychological distance and trust in the relationship between human representation and compliance intention was also supported. These findings provide practical guidance for healthcare chatbot developers and theoretical implications for human-computer interaction research.