2022
DOI: 10.48550/arxiv.2202.12883
Preprint

Human Detection of Political Deepfakes across Transcripts, Audio, and Video

Abstract: Recent advances in technology for hyper-realistic visual effects provoke the concern that deepfake videos of political speeches will soon be visually indistinguishable from authentic video recordings. Yet there exists little empirical research on how audio-visual information influences people's susceptibility to fall for political misinformation. The conventional wisdom in the field of communication research predicts that people will fall for fake news more often when the same version of a story is presented a…

Cited by 7 publications (6 citation statements)
References: 60 publications

Citation statements:
“…Thus, future work should address the extent to which prior information on, or familiarity with, deepfakes could affect observers' performance, and potentially interact with individual differences in face identity processing ability. Finally, facial deepfakes may be combined with audio content, which in isolation can facilitate or hamper deepfake detection performance [27]. Potentially, the detection of deepfakes involving both audio and visual information could relate to stable individual differences in multisensory integration.…”
Section: Limitations and Future Outlook (mentioning, confidence: 99%)
“…In the context of AI-generated warning labels, recent work has shown that explaining how the label was created can increase the labels' efficacy [5]. For their experiments on the human detection of political deepfakes, [15] include a disclosure stating the content "contains AI-generated content" but do not test the efficacy of that particular disclosure statement.…”
Section: Warning Labels To Mitigate Misinformation (mentioning, confidence: 99%)
“…A recent study, however, challenges this conventional wisdom (Groh, Sankaranarayanan, and Picard 2022). After seeing or reading a short political statement by either Joe Biden or Donald Trump, participants were asked to determine whether the statement could be attributed to either individual.…”
Section: Disinformation (mentioning, confidence: 99%)