2017
DOI: 10.1145/3126492

Is that social bot behaving unethically?

Abstract: A procedure for reflection and discourse on the behavior of bots in the context of law, deception, and societal norms.

Cited by 44 publications (12 citation statements)
References 2 publications

“…In [24], for instance, the authors investigate how to handle race-talks with a chatbot from a socio-technical perspective. AI-based applications require a wider and deeper discussion on ethical aspects, as initially addressed in [23].…”
Section: Towards Trustfulness (mentioning, confidence: 99%)
“…In the misinformation-related literature, social bots on Twitter are typically associated with unethical behaviour, being misinformation spreaders [4,12] or content polluters [10,15]. They are not yet seen as possible agents in the battle to restrain misinformation spread, nor connected with a conversational agent.…”
Section: Social Bots Design (mentioning, confidence: 99%)
“…To this end, we introduce a set of two Twitter agents that, in complementary ways, approach Twitter users who have shared verified misinformation and invite them to follow and interact with a conversational agent that explains the credibility assessment of tweets or news. The first agent is a social bot, as defined in [12]: computer algorithms that can share content and connect with users on social media; the second is a conversational agent that, for the moment, verifies whether pieces of news or tweets have already been fact-checked and briefly explains the assessment. As ongoing research, the conversational agent is evolving to provide deeper explanations of the credibility indicators used in fact-checking.…”
Section: Introduction (mentioning, confidence: 99%)
“…To date, propaganda has begun to rise exponentially, with its global effect increasing by 150% over the past two years according to the Computational Propaganda Research Project (COMPROP) [6]-[8]. The social bots that reside in social media are also believed to be one of the major factors contributing to propaganda dispersion in 2017, when approximately 23 million social bots were found among Twitter accounts [9]. The spread of propaganda has not only brought about disruption in global politics but has also caused cyber-hate, riots, threats, and even numerous instances of violence.…”
Section: Introduction (mentioning, confidence: 99%)