2023
DOI: 10.1093/cybsec/tyad011

Testing human ability to detect ‘deepfake’ images of human faces

Abstract: ‘Deepfakes’ are computationally created entities that falsely represent reality. They can take image, video, and audio modalities, and they pose a threat to many areas of systems and societies, making them a topic of interest across cybersecurity and cybersafety. In 2020, a workshop consulting AI experts from academia, policing, government, the private sector, and state security agencies ranked deepfakes as the most serious AI threat. These experts noted that since fake material can propagate through …


Cited by 28 publications (4 citation statements)
References 59 publications
“…Moreover, AI-synthesized faces are less variable in facial shape, i.e., show lower morphological disparity, and have lower levels of facial asymmetry than natural faces do. From the perspective of objectively quantifiable morphometric measurements, artificial and natural faces are still distinguishable, although people cannot see these differences (Bray et al 2023; Lago et al 2022; Nightingale and Farid 2022; Rossi et al 2022; Tucciarelli et al 2022).…”
Section: Discussion
confidence: 99%
“…In this study, we wanted to investigate whether the mean morphometric features (including symmetry and shape variance, here measured as morphological disparity) of AI-generated faces are the same as those of natural faces. Unlike several previous studies (Bray, Johnson, and Kleinberg 2023; Lago et al 2022; Nightingale and Farid 2022; Rossi et al 2022; Tucciarelli et al 2022), we used standardized synthesized faces with a neutral expression and compared them to natural faces selected from our database of standardized facial portraits. Recent studies have shown that humans are no longer able to distinguish artificially generated facial stimuli from portrait photographs of real human beings.…”
Section: Introduction
confidence: 99%
“…While the existing literature provides insightful discussions on many types and techniques of cyber-attacks, it notably overlooks emerging threats, which could be vital for local governments. Emerging threats in Internet security include deepfake technology and AI-powered social engineering attacks, which can lead to misinformation or the manipulation of public opinion [92,93]. In web security, newer threats such as API attacks [94] and crypto-jacking that target web-based applications are often overlooked [95,96].…”
Section: Types and Techniques of Cyber-attacks
confidence: 99%
“…This prompts the question—is AI‐generated content ecologically valid? Behavioural research confirms that it is difficult for humans to detect high‐quality AI‐generated images, suggesting they accurately portray real people, places and things (e.g., Bray et al, 2022; Korshunov & Marcel, 2020; Lu et al, 2023; Shen et al, 2021). The ultimate test might be whether real and AI‐generated faces elicit similar neural responses, given human expertise in face perception (Haxby et al, 2000).…”
Section: Introduction
confidence: 99%