2021
DOI: 10.24251/hicss.2021.541
Is Making Mistakes Human? On the Perception of Typing Errors in Chatbot Communication

Abstract: The increasing application of Conversational Agents (CAs) changes the way customers and businesses interact during a service encounter. Research has shown that CAs equipped with social cues (e.g., having a name, greeting users) stimulate users to perceive the interaction as human-like, which can positively influence the overall experience. Specifically, social cues have been shown to lead to increased customer satisfaction, perceived service quality, and trustworthiness in service encounters. However, many CAs a…

Cited by 15 publications (4 citation statements)
References 26 publications (58 reference statements)
“…These findings point to a different standard in terms of expectations when interacting with a human or a chatbot. Previous work has found similar gaps in expectations for interactions with humans versus technology (such as conversational agents), especially in terms of capability and intelligence [57][58][59]. One study by Grimes et al [60] framed this in terms of expectancy violation theory, which posits that when expectations for interaction are violated by one of the participants, it can lead to either positive or negative effects on outcomes such as attraction, credibility, persuasion, and smoothness of interactions depending on the direction of the violation [61].…”
Section: Discussion
confidence: 73%
“…On the one hand, a non-gender-matching name and avatar can be perceived as an error by the developer (Mozafari et al., 2022). A flawed CA is generally perceived less favorably than a flawless one (e.g., in terms of perceived humanness or service satisfaction; Bührke et al., 2021; Riquel et al., 2021), so the observed effect may have originated here. On the other hand, the perception of gender may also correlate with the gender of each participant (here 67% were female) (Marecek, 1995) or with their attitude towards sexism (i.e., whether a stereotypical gender is expected and everything else is generally rejected) (Swim & Hyers, 2010).…”
Section: Results
confidence: 99%
“…Thus, the interaction becomes more familiar. On the other hand, people could perceive a gender-mixed CA as a mistake by the developer (Mozafari et al., 2022), which negatively influences their perception (Bührke et al., 2021). Practitioners are therefore advised to assign CAs a binary gender with a corresponding binary name and avatar.…”
Section: Discussion
confidence: 99%
“…For example, prior work has found that adults prefer and anthropomorphize robots that make social errors (e.g., not following the rules, incongruent gestures, cheating; Mirnig et al., 2017; Salem et al., 2013; Short et al., 2010) or that provide further social cues after the error (e.g., giving an apology; Lee et al., 2010). However, adults do not anthropomorphize technologies that make technical errors (e.g., typos; Bührke et al., 2021; Westerman et al., 2019). It remains an open question whether children are sensitive to these two types of errors, but we suspect that children regularly encounter technical errors with the technologies in their homes more often than social errors.…”
Section: Discussion
confidence: 99%