In recent years, a number of philosophers have turned their attention to developments in Artificial Intelligence, and in particular to deepfakes. The term ‘deepfake' is a portmanteau of ‘deep learning' and ‘fake', and deepfakes are, for the most part, videos which depict people doing and saying things they never did. As a result, much of the emerging literature on deepfakes has turned on questions of trust, harms, and information-sharing. In this paper, I add to the emerging concerns around deepfakes by drawing on resources from vice epistemology. As deepfakes become more sophisticated, I claim, they will develop into a source of online epistemic corruption. More specifically, they will encourage consumers of digital online media to cultivate and manifest various epistemic vices. My immediate focus in this paper is on their propensity to encourage the development of what I call ‘intellectual cynicism'. After sketching a rough account of this epistemic vice, I go on to suggest that we can partially offset such cynicism – and fears around deceptive online media more generally – by encouraging the development of what I term a trained ‘digital sensibility'. This, I contend, involves a calibrated sensitivity to the epistemic merits (and demerits) of online content.
Recent developments in AI technology have led to increasingly sophisticated forms of video manipulation. One such form has been the advent of deepfakes. Deepfakes are AI-generated videos that typically depict people doing and saying things they never did. In this paper, I demonstrate that there is a close structural relationship between deepfakes and more traditional fake barn cases in epistemology. Specifically, I argue that deepfakes generate a degree of epistemic risk analogous to that found in traditional barn cases. Given that barn cases have posed a long-standing challenge for virtue-theoretic accounts of knowledge, I consider whether a similar challenge extends to deepfakes. In doing so, I consider how Duncan Pritchard’s recent anti-risk virtue epistemology meets the challenge. While Pritchard’s account avoids problems in traditional barn cases, I claim that in the case of deepfakes it leads to local scepticism about knowledge from online videos. I end by considering how two alternative virtue-theoretic approaches might vindicate our epistemic dependence on videos in an increasingly digital world.