Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems 2020
DOI: 10.1145/3313831.3376843
Effects of Persuasive Dialogues: Testing Bot Identities and Inquiry Strategies

Abstract: Intelligent conversational agents, or chatbots, can take on various identities and are increasingly engaging in more human-centered conversations with persuasive goals. However, little is known about how identities and inquiry strategies influence the conversation's effectiveness. We conducted an online study involving 790 participants to be persuaded by a chatbot for charity donation. We designed a two by four factorial experiment (two chatbot identities and four inquiry strategies) where participants were ran…

Cited by 57 publications
(44 citation statements)
References 49 publications
“…Deciding what name to call the chatbot and whether to frame it as a human peer or as a transparent bot system requires careful consideration. Our recent work [ 52 ] suggests that as AI chatbots are quickly adopting human conversational capacities, the perceived identity of a chatbot has significant effects on the persuasion outcome and interpersonal perceptions. Furthermore, our study findings suggest that users respond better if the chatbot’s identity is clearly presented.…”
Section: Results
confidence: 99%
“…This may be because users can develop more agency and control if they know how to respond to the conversational partner by applying different communication norms. For instance, if a chatbot is presented with a human identity and tries to imitate human inquiries by asking personal questions, the UVE can be elicited, making people feel uncomfortable [ 52 ]. However, contrary findings have also been identified, as some studies show evidence that people respond well and disclose more personal information if the chatbot is presented as a bot and can also display emotions [ 60 , 61 ].…”
Section: Results
confidence: 99%
“…Research on the repercussions of chatbot disclosure is still at a nascent stage. Pioneering empirical studies have focused on understanding the effect of disclosing vs. not disclosing the chatbot's identity to users and arrived at the conclusion that transparently communicating chatbot identity comes at the cost of negative user reactions: it may reduce customer retention [20], user acceptance [21], duration of interaction and purchase rate [17], efficiency of human-machine cooperation [13], perceived social presence and humanness [12], and persuasion efficiency [25]. These results are startling, as negative biases toward disclosed bots emerge despite equal performance levels of disclosed and undisclosed bots and the superiority of the examined bots over humans.…”
Section: Related Work On Chatbot Disclosure
confidence: 99%
“…Studies on chatbot disclosure show that this effect prevails not only when comparing algorithms to humans, but also when comparing disclosed algorithms to undisclosed algorithms. This implies that it is not the actual but the perceived identity that impacts trust [25].…”
Section: Trust In Algorithms
confidence: 99%