2021
DOI: 10.1075/is.20002.kei
What’s to bullying a bot?

Abstract: In human-chatbot interaction, users casually and regularly offend and abuse the chatbot they are interacting with. The current paper explores the relationship between chatbot humanlikeness on the one hand and users' sexual advances and verbal aggression on the other. A total of 283 conversations between the Cleverbot chatbot and its users were harvested and analysed. Our results showed higher counts of user verbal aggression and sexual comments towards Cleverbot when Cleverbot appeared more humanlike in its beh…


Cited by 6 publications (1 citation statement)
References 41 publications
“…Given the importance of morality in social interaction, designers may want to implement such features in AIs only when they aim to mimic human-human interaction. By increasing moral consideration, designing AIs with human-like bodies and prosociality could also help solve the problem of people being abusive towards AIs [2,51], which can cause expensive damage and dangerous situations for bystanders, though further research should be conducted on this question because human-likeness in AIs has also been found to be associated with greater levels of abuse [35]. Additionally, Schwitzgebel and Garza [62] argue that we should design AI systems that evoke reactions that reflect their true moral status (i.e., how much they matter morally, for their own sake).…”
Section: Discussion
confidence: 99%