Proceedings of the 1st International Workshop on Deepfake Detection for Audio Multimedia 2022
DOI: 10.1145/3552466.3556531
Human Perception of Audio Deepfakes

Cited by 30 publications (18 citation statements)
References 13 publications
“…The histogram shows performance is relatively stable across the questions. This observation indicates participants do not improve throughout the task unless they have explicit feedback, as examined by Groh et al. (2022) and Müller et al. (2022). We quantitatively verified the result by conducting a chi-squared hypothesis test against the uniform distribution, which was not statistically significant (χ² = 6.19, p = .99).…”
Section: Participants Do Not Get Better Throughout the Task Without E...
confidence: 68%
“…Similarly to Groh et al. (2022), they found that feedback from the ML model improved human performance. In their experiment, Müller et al. (2022) found that the difference between human and AI accuracy was about 10%. However, their study only used English-language clips, only presented one audio clip to participants at a time, and did not collect information about participant confidence.…”
Section: Related Work on Deepfake Detection
confidence: 99%
“…By contrast, our present studies reveal that, in at least some cases, subjects can not only identify that adversarial clips contain speech, but can also comprehend the content of some adversarial audio attacks. Other work has investigated human versus machine perception of “deepfake” stimuli, including both auditory and visual examples (Groh, Epstein, Firestone, & Picard, 2022; Müller, Markert, & Böttinger, 2021).…”
Section: Discussion
confidence: 99%