Augmented reality (AR) technologies ‘augment’ normal perception by superimposing virtual objects onto an agent’s visual field. The philosophy of augmented reality is a small but growing subfield within the philosophy of technology. Existing work in this subfield includes research on the phenomenology of augmented experiences, the metaphysics of virtual objects, and various ethical issues associated with AR systems, including (but not limited to) issues of privacy, property rights, ownership, trust, and informed consent. This paper addresses some epistemological issues posed by AR systems. I focus on a near-future version of AR technology called the Real-World Web, which promises to radically transform our relationship to digital information by mixing the virtual with the physical. I argue that the Real-World Web (RWW) threatens to exacerbate three existing epistemic problems in the digital age: the problem of digital distraction, the problem of digital deception, and the problem of digital divergence. The RWW is poised to present new versions of these problems in the form of what I call the augmented attention economy, augmented skepticism, and the problem of other augmented minds. The paper draws on a range of empirical research on AR and offers a phenomenological analysis of virtual objects as perceptual affordances to help ground and guide the speculative nature of the discussion. It also considers a few policy-based and design-based proposals to mitigate the epistemic threats posed by AR technology.
This chapter focuses on AI-based cognitive and perceptual enhancements. AI-based brain enhancements are already under development, and they may become commonplace over the next 30–50 years. We raise doubts about whether the radical AI-based enhancements that transhumanists advocate will accomplish the transhumanist goals of longevity, human flourishing, and intelligence enhancement. We urge that even if the technologies are medically safe and are not used as tools by surveillance capitalism or an authoritarian dictatorship, these enhancements may still fail to do their job for philosophical reasons. In what follows, we explore one such concern, a problem involving the nature of the self. We illustrate that the so-called transhumanist efforts to “merge oneself with AI” could lead to perverse realizations of AI technology, such as the demise of the very person who sought enhancement. And, in a positive vein, we offer ways to avoid this outcome, at least within the context of one theory of the nature of personhood.
How does the integration of mixed reality devices into our cognitive practices impact the mind from a metaphysical and epistemological perspective? In his innovative and interdisciplinary article, “Minds in the Metaverse: Extended Cognition Meets Mixed Reality” (2022), Paul Smart addresses this underexplored question, arguing that the use of a hypothetical application of the Microsoft HoloLens called “the HoloFoldit” represents a technologically high-grade form of extended cognizing from the perspective of neo-mechanical philosophy. This short commentary aims to (1) carve up the conceptual landscape of possible objections to Smart’s argument and (2) elaborate on the possibility of hologrammatically extended cognition, which is supposed to be one of the features of the HoloFoldit case that distinguishes it from more primitive forms of cognitive extension. In tackling (1), I do not mean to suggest that Smart does not consider or have sufficient answers to these objections. In addressing (2), the goal is not to argue for or against the possibility of hologrammatically extended cognition but to reveal some issues in the metaphysics of virtual reality upon which this possibility hinges. I construct an argument in favor of hologrammatically extended cognition based on the veracity of virtual realism (Chalmers, 2017) and an argument against it based on the veracity of virtual fictionalism (McDonnell and Wildman, 2019).
This paper offers a novel argument against the phenomenal intentionality thesis (or PIT for short). The argument, which I'll call the extended mind argument against phenomenal intentionality, is centered on two claims: the first asserts that some source intentional states extend into the environment, while the second maintains that no conscious states extend into the environment. If these two claims are correct, then PIT is false, for PIT implies that the extension of source intentionality is predicated upon the extension of phenomenal consciousness. The argument is important because it undermines an increasingly prominent account of the nature of intentionality. PIT has entered the philosophical mainstream and is now a serious contender to naturalistic views of intentionality like the tracking theory and the functional role theory (Loar 1987, 2003; Searle 1990; Strawson 1994; Horgan and Tienson 2002; Pitt 2004; Farkas 2008; Kriegel 2013; Montague 2016; Bordini 2017; Forrest 2017; Mendelovici 2018). The extended mind argument against PIT challenges the popular sentiment that consciousness grounds intentionality.