2022
DOI: 10.1177/14614448211072307

“I agree with you, bot!” How users (dis)engage with social bots on Twitter

Abstract: This article investigates under which conditions users on Twitter engage with or react to social bots. Based on insights from human–computer interaction and motivated reasoning, we hypothesize that (1) users are more likely to engage with human-like social bot accounts and (2) users are more likely to engage with social bots which promote content congruent to the user’s partisanship. In a preregistered 3 × 2 within-subject experiment, we asked N = 223 US Americans to indicate whether they would engage with or …

Cited by 15 publications (8 citation statements); references 45 publications.
“…In short, the account @UAWeapons utilized hashtags and videos extensively in order to increase its popularity. Furthermore, it was argued that tweets with a clear stance and congruent with the retweeter's perspective are more likely to be disseminated (Wischnewski et al, 2022), which was also confirmed in this study. Despite posting clearly pro-Ukrainian and anti-Russian content, this account tried to portray itself as a neutral information provider via its profile and hashtag (e.g., the most commonly used hashtag #Ukraine).…”
Section: Discussion (supporting)
confidence: 83%
“…Second, in the case of the information warfare between Russia and Ukraine, it seems doubtful that tweets with a clear stance (Luceri et al, 2019) and that are congruent with the retweeter’s perspective are more likely to be disseminated (Duan et al, 2022; Wischnewski et al, 2022). Politically inclined individuals may be skeptical of neutral news messages that disagree with their views (Hameleers, 2022).…”
Section: Discussion (mentioning)
confidence: 99%
“…However, contrary to previous findings suggesting that partisans deliberately share any online content that hurts political opponents (Osmundsen et al, 2021), in our study, participants did care whether the profiles they engage with were operated by humans or by bots. Both, Democrats and Republicans favoured engagement with human-driven profiles, hence supporting the idea that human-bot interactions form largely inadvertently (cf., Wischnewski et al, 2022).…”
Section: Brief Discussion (mentioning)
confidence: 99%
“…Social bots, for example, are AI-powered applications that automatically produce content and interact with humans (Ferrara et al , 2016). Aside from automating routine commercial tasks such as customer service, bots can be used in corporate PR communication (Wischnewski et al , 2024) and to influence public opinion and attitudes, as illustrated in the political science literature (Godulla et al , 2021). Bots can leave comments, engage in conversations and distribute likes in social media on an automatic and semi-autonomous basis.…”
Section: Indirect Lobbying: Algorithmic and Participatory Public Rela... (mentioning)
confidence: 99%