2021
DOI: 10.31234/osf.io/8vyxr
Preprint
Disagree? You Must be a Bot! How Beliefs Shape Twitter Profile Perceptions

Abstract: In this paper, we investigate the human ability to distinguish political social bots from humans on Twitter. Following motivated reasoning theory from social and cognitive psychology, our central hypothesis is that accounts which are opinion-incongruent are especially likely to be perceived as social bot accounts when the account is ambiguous about its nature. We also hypothesize that credibility ratings mediate this relationship. We asked N = 151 participants to evaluate 24 Twitter accounts and decide whether the ac…

Cited by 3 publications (4 citation statements). References: 0 publications.
“…The study where these accounts were collected referred to them as "content polluters" and did not claim that these accounts were automated or bots [13]. Many of the accounts from other sources were apparently labeled manually by laypersons with little understanding of the state of the art in human-machine interaction and of the difficulty of evading Twitter's detection of nefarious platform use, and based on a naïve understanding of what constitutes a "bot" (possibly based on questionable clues such as a high number of retweets, a small or large number of followers, a missing profile picture, digits in the Twitter handle, or, as empirically validated in [17], opposing political views). Some accounts in the "bot repository" were explicitly labeled as "bots" because they appeared to have participated in "follow trains", a technique used by human political activists on Twitter to rapidly increase their follower count.…”
Section: Theoretical and Methodological Limitations of Botometer-based… (citation type: mentioning; confidence: 99%)
“…Thus, bots could readily be accepted as parts of the social system. The known presence of bots in the system could make interaction with bots (even if confrontational) more acceptable [26]. Research studying differences in interaction patterns across human-to-human and human-to-bot interactions reveals striking similarities across the two setups [27], indicating particular ease in treating automated agents similarly to human users.…”
Section: Decisions Before and After Interacting (citation type: mentioning; confidence: 99%)
“…This ambiguity provides social media users with a scapegoat for their unpleasant online experiences (Halperin, 2021). For example, one may reject accounts with opposing political views by labeling them as bots (Wischnewski et al., 2021; Yan et al., 2021). Such a confirmation bias is only one of many perceptual biases on which users rely when making judgments about online interactions (Hills, 2019).…”
Section: Introduction (citation type: mentioning; confidence: 99%)
“…The biases of annotators can therefore propagate through the pipeline and affect downstream tasks. Since a few studies have already revealed perceptual biases in human-bot interactions (Wischnewski et al., 2021; Yan et al., 2021), more research is needed.…”
Section: Introduction (citation type: mentioning; confidence: 99%)