Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems
DOI: 10.1145/3544548.3581219

Exploring the Use of Personalized AI for Identifying Misinformation on Social Media

Cited by 12 publications (4 citation statements)
References 69 publications
“…Additional cons that they discussed included that they want to think for themselves, unassisted by anyone (N=4), a sentiment that has been reported in prior work as well [48], and a worry that users may become accustomed to taking all content with a green checkmark next to it as credible, and not fact-check content for themselves (N=1).…”
Section: (N=3)
Citation type: mentioning (confidence: 99%)
“…Restricting the showing of assessments to topics of interest (N=1) […] sees assessments on content more often can be dealt with in two ways. One is using an AI that learns assessments from a select set of people (e.g., a user's trusted associates) and predicts how they would assess other similar content, such as the AI in [48]. Another is to explore whether a more extensive trust network can be built for each user by leveraging transitivity of trust.…”
Section: Incentives
Citation type: mentioning (confidence: 99%)
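The trust-transitivity idea in the excerpt above can be made concrete with a small sketch. The following is a minimal, hypothetical illustration, not the cited paper's actual mechanism: it extends a user's direct trust list by following trusted-of-trusted edges with per-hop attenuation. All function names, weights, and thresholds are assumptions for illustration.

```python
# Hypothetical sketch of trust transitivity: expand a user's direct
# trust list by walking "trusted-of-trusted" edges with a decay factor,
# so assessments from indirect contacts can also be surfaced. The decay
# scheme and thresholds are assumptions, not the cited paper's design.

from collections import deque

def expand_trust_network(direct_trust, trust_graph, decay=0.5, min_trust=0.2):
    """Breadth-first expansion over a directed, weighted trust graph.

    direct_trust: dict mapping user -> trust weight in [0, 1]
    trust_graph:  dict mapping user -> dict of users they trust (with weights)
    Returns a dict mapping reachable users to a propagated trust score.
    """
    scores = dict(direct_trust)
    queue = deque(direct_trust.items())
    while queue:
        user, score = queue.popleft()
        for neighbor, weight in trust_graph.get(user, {}).items():
            propagated = score * weight * decay  # trust attenuates per hop
            if propagated >= min_trust and propagated > scores.get(neighbor, 0.0):
                scores[neighbor] = propagated
                queue.append((neighbor, propagated))
    return scores

# Example: Alice directly trusts Bob; Bob trusts Carol, so Carol enters
# Alice's extended network with an attenuated score. Dave falls below
# the cutoff and is excluded.
graph = {"bob": {"carol": 0.9}, "carol": {"dave": 0.8}}
print(expand_trust_network({"bob": 1.0}, graph))
# {'bob': 1.0, 'carol': 0.45}
```

The decay factor encodes the intuition that transitive trust weakens with distance, and the cutoff keeps the extended network from sprawling to the whole graph.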
“…Concurrently with our work, it has been found that GPT-based explanations of content veracity can significantly reduce social media users' reported tendency to accept false claims (Hsu et al 2023), though they can be equally effective when used with malicious intent to generate deceptive explanations (Danry et al 2022). There have also been some early works that explore the use of personalization in AI fact-checking systems, such as Jahanbakhsh et al (2023), which examines the effects of a personalized AI prediction tool based on the user's own assessments, and Jhaver et al…”
Section: Automated Misinformation Mitigation
Citation type: mentioning (confidence: 99%)
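For readers unfamiliar with the setup the excerpt attributes to Jahanbakhsh et al. (2023), here is a minimal sketch of a per-user predictor fit on that user's own past assessments. The TF-IDF features and logistic-regression classifier are assumptions chosen for illustration; the cited work's actual model may differ.

```python
# Minimal sketch of a personalized veracity predictor: a model trained
# on one user's *own* accuracy assessments that predicts how that user
# would label unseen posts. Feature choice and classifier are
# illustrative assumptions, not the cited paper's method.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Posts this user already assessed (1 = accurate, 0 = inaccurate).
past_posts = [
    "Vaccine trial results published in peer-reviewed journal",
    "Miracle cure suppressed by doctors, share before deleted",
    "City council approves new transit budget after public hearing",
    "Secret footage proves the moon landing was staged",
]
past_labels = [1, 0, 1, 0]

# Train a per-user model on that user's assessment history alone.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(past_posts, past_labels)

# Predict how the user would likely assess a new, similar post.
new_post = ["Leaked memo reveals cure doctors do not want you to see"]
print(model.predict_proba(new_post))  # columns: [P(inaccurate), P(accurate)]
```

Because the model is trained only on the individual's own labels, its predictions reflect that user's judgments rather than a platform-wide notion of credibility, which is the personalization property the quoted passage highlights.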