2019 IEEE 21st Conference on Business Informatics (CBI)
DOI: 10.1109/cbi.2019.00056
Effect of Transparency and Trust on Acceptance of Automatic Online Comment Moderation Systems

Cited by 21 publications (8 citation statements)
References 29 publications
“…Furthermore, this supplementary information may implicitly increase the perception of transparency in the model and positively affect user confidence. This interpretation aligns with Brunk et al. (2019), who showed that black-box algorithms are perceived as more transparent and trustworthy when additional explanatory information is present. However, the adjacency comparisons (Figure 5) indicate no significant difference between levels of MC when explanation data are presented adjacently.…”
Section: The Conditional Effect of Morphological Clarity on User Conf... (supporting)
Confidence: 88%
“…Additionally, trust moderated the relationship between FAT and user satisfaction in their study. Other research found that the transparency of an automatic moderation system is a prerequisite for trust in that system (Brunk et al., 2019). Yet, an important question is whether trust and the perceived fairness of a moderation decision vary by the type of moderation source (AI vs. user-based) and its visibility.…”
Section: Sources of Moderation (mentioning)
Confidence: 99%
“…Commenters openly rebuked TikTok for sharing educational materials addressing fake news while failing to curb the spread of misinformation, which resulted in a diminished level of trust. Providing explicit reasoning around content moderation practices promotes a higher level of user trust (Brunk et al., 2019), which might improve the effectiveness of such educational initiatives. Platforms should aim for increased transparency in communicating how decisions are made regarding user reports of misinformation or content violations.…”
Section: Implications and Recommendations (mentioning)
Confidence: 99%