2022
DOI: 10.1073/pnas.2216035119
Protecting world leaders against deep fakes using facial, gestural, and vocal mannerisms

Abstract: Since their emergence a few years ago, artificial intelligence (AI)-synthesized media—so-called deep fakes—have dramatically increased in quality, sophistication, and ease of generation. Deep fakes have been weaponized for use in nonconsensual pornography, large-scale fraud, and disinformation campaigns. Of particular concern is how deep fakes will be weaponized against world leaders during election cycles or times of armed conflict. We describe an identity-based approach for protecting world leaders from deep…

Cited by 22 publications (9 citation statements)

References 15 publications
“…On this thinking, DALL•E's capacity to produce works that observers tend to prefer over human artworks can be explained by the fact that its training dataset comprises images of artworks that are broadly considered to be aesthetically pleasing. This is in line with recent research on 'deepfakes', wherein AI-generated faces not only fool observers with their hyper-realistic nature, but are also associated with enhanced perceptions of trustworthiness [2,14,15]. These findings raise critical concerns about the exact nature of the cognitive processes that could be targeted and manipulated using generative-AI, and therefore, about its large-scale deployment without detailed investigation.…”
Section: Discussion (supporting)
confidence: 89%
“…In this study, we focused on still images. However, for videos, additional indicators beyond those outlined here, such as motion and optical flow, as well as the synchronization of lip, face, and head movements over time, can also be significant factors [3]. Educating individuals on the cues outlined in this paper may aid in combatting deepfake proliferation.…”
Section: Discussion (mentioning)
confidence: 94%
“…Tested on several hours of authentic video, the identity-based deepfake detection approach was found to capture distinctive gestural, facial, and vocal features. In this way, the threat of deepfakes deployed against world leaders during elections or armed conflicts is reduced (Boháček and Farid, 2022).…”
Section: Discussion (mentioning)
confidence: 99%
“…An increased number of cyberattacks, which may have military or political goals, has been noted. Farid (2021) and Boháček and Farid (2022) considered state and individual protective measures against deepfakes, as well as the ethics of using deepfakes as a weapon. Particular concern about the use of deepfakes against world leaders during armed conflicts has been noted.…”
Section: Literature Review (mentioning)
confidence: 99%