2022
DOI: 10.54501/jots.v1i4.56
Creating, Using, Misusing, and Detecting Deep Fakes

Abstract: Synthetic media—so-called deep fakes—have captured the imagination of some and struck fear in others. Although they vary in their form and creation, deep fakes refer to text, image, audio, or video that has been automatically synthesized by a machine-learning system. Deep fakes are the latest in a long line of techniques used to manipulate reality, yet their introduction poses new opportunities and risks due to the democratized access to what would have historically been the purview of Hollywood-style studios.…

Cited by 35 publications (14 citation statements)
References 51 publications
“…If one's concern is the use of generative AI for deceit, it seems exceedingly unlikely that people who are trying to deceive will opt in to self-disclosure. Methods for detecting synthetic media via machine learning [14], crowdsourcing [14], and digital forensics [8] offer possibilities for active classification, although as generative AI technology continues to advance, it will likely get harder and harder to detect generated content. More generally, the question of who gets to decide what content counts as AI-generated is central, and for labels to be effective, it is critical for the process to be perceived as legitimate [22].…”
Section: Discussion, Limitations, and Future Work
Mentioning confidence: 99%
“…Our training used stochastic gradient descent with 0.9 momentum and a learning rate schedule (decreasing the learning rate every 8 epochs). Polyak averaging [13] was used to obtain the final model used at inference time.…”
Section: Methods
Mentioning confidence: 99%
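The training recipe quoted above is a standard one. As a minimal sketch of what it describes — where only the 0.9 momentum and the 8-epoch decay interval come from the excerpt, and the toy 1-D quadratic loss, initial learning rate, decay factor, and steps-per-epoch are illustrative assumptions — it might look like:

```python
# Sketch: SGD with momentum 0.9, a step learning-rate schedule
# (decay every 8 epochs), and Polyak averaging of the iterates.

target = 3.0                      # minimizer of the toy loss 0.5 * (w - target)^2

def grad(w):
    return w - target             # gradient of the toy quadratic loss

w = 0.0                           # model parameter
velocity = 0.0                    # momentum buffer
w_avg = 0.0                       # Polyak (running) average of the iterates
lr = 0.1                          # initial learning rate (assumed)
momentum = 0.9                    # momentum coefficient from the excerpt
steps = 0

for epoch in range(24):
    if epoch > 0 and epoch % 8 == 0:
        lr *= 0.1                 # decrease the learning rate every 8 epochs
    for _ in range(10):           # 10 gradient steps per "epoch" (assumed)
        velocity = momentum * velocity - lr * grad(w)
        w += velocity
        steps += 1
        w_avg += (w - w_avg) / steps   # uniform running mean of all iterates

# At inference time, the Polyak-averaged parameter w_avg would be used
# in place of the last iterate w.
print(round(w, 4), round(w_avg, 2))
```

Polyak averaging smooths out the oscillations that momentum SGD exhibits around the minimum, which is why the averaged parameters, rather than the final iterate, are used at inference time.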
“…A "deep fake" of a sensor output is an artificially constructed "realistic looking" fake sensor output obtained using a deep generator architecture, such that it is not possible to distinguish it from the real sensor data [12]. Deep fakes have been successfully constructed for different types of sensor data including voice, video, and image [13]. With the possibility of deep fakes being used as inputs to an AI inference system, the foundational assumption of the AI algorithms powering these "smart systems" has been demolished.…”
Section: Introduction
Mentioning confidence: 99%
“…Continued consumer interest in deepfakes is reflected in the proliferation of dedicated deepfake sites and forums, often depicting celebrity targets. While deepfakes can be used in beneficial ways for accessibility and creativity [19,26], abuse potential has increased in recent years as the technology has advanced in sophistication and availability [12,34,53,80]. Deepfakes can be weaponized and used for malicious purposes, including financial fraud, disinformation dissemination, cyberbullying, and sexual extortion ("sextortion") [4,26].…”
Section: Introduction
Mentioning confidence: 99%