While developing a story, novices and published writers alike have had to look outside themselves for inspiration. Language models have recently been able to generate text fluently, producing new stochastic narratives upon request. However, effectively integrating such capabilities with human cognitive faculties and creative processes remains challenging. We propose to investigate this integration with a multimodal writing support interface that offers writing suggestions textually, visually, and aurally. We conduct an extensive study that combines elicitation of prior expectations before writing, observation and semi-structured interviews during writing, and outcome evaluations after writing. Our results illustrate individual and situational variation in machine-in-the-loop writing approaches, suggestion acceptance, and ways the system is helpful. Centrally, we report how participants perform integrative leaps, by which they do cognitive work to integrate suggestions of varying semantic relevance into their developing stories. We interpret these findings, offering modeling and design recommendations for future creative writing support technologies.
We explore the application of a wide range of sensory stimulation technologies to the area of sleep and dream engineering. We begin by emphasizing the causal role of the body in dream generation, and describe a circuit between the sleeping body and the dreaming mind. We suggest that nearly any sensory stimulus has the potential to modulate experience in sleep. Considering other areas that might afford tools for engineering sensory content in simulated worlds, we turn to Virtual Reality (VR). We outline a collection of relevant VR technologies, including devices engineered to stimulate haptic, temperature, vestibular, olfactory, and auditory sensations. We believe these technologies, which have been developed for high mobility and low cost, can be translated to the field of dream engineering. We close by discussing possible future directions in this field and the ethics of a world in which targeted dream direction and sleep manipulation are feasible.
Accessibility, adaptability, and transparency of Brain-Computer Interface (BCI) tools and the data they collect will likely impact how we collectively navigate a new digital age. This discussion reviews some of the diverse and transdisciplinary applications of BCI technology and draws speculative inferences about the ways in which BCI tools, combined with machine learning (ML) algorithms, may shape the future. BCIs come with substantial ethical and risk considerations, and it is argued that open-source principles may help us navigate complex dilemmas by encouraging experimentation and making developments public as we build safeguards into this new paradigm. Bringing open-source principles of adaptability and transparency to BCI tools can help democratize the technology, permitting more voices to contribute to the conversation about what a BCI-driven future should look like. Open-source BCI tools and access to raw data, in contrast to black-box algorithms and limited access to summary data, are critical facets enabling artists, DIYers, researchers, and other domain experts to participate in the conversation about how to study and augment human consciousness. In a future in which augmented and virtual reality become integral parts of daily life, BCIs will likely play an increasingly important role in creating closed-loop feedback for generative content. Brain-computer interfaces are uniquely situated to provide artificial intelligence (AI) algorithms with the data necessary for determining the decoding and timing of content delivery. The extent to which these algorithms are open source may be critical to examining them for integrity, implicit bias, and conflicts of interest.
The sense of agency (SoA) describes the feeling of being the author of, and in control of, one's movements. It is closely linked to automated aspects of sensorimotor control and is understood to depend on one's ability to monitor the details of one's movements. As such, SoA has been argued to be a critical component of self-awareness in general and to contribute to presence in virtual reality environments in particular. A common approach to investigating SoA is to ask participants to perform goal-directed movements while introducing spatial or temporal visuomotor mismatches into the feedback. Feedback movements are traditionally either switched with someone else's movements using a 2D video feed or modified by providing abstracted feedback about one's actions on a computer screen. The aim of the current study was therefore to quantify conscious monitoring and the SoA for ecologically (more) valid, three-dimensional feedback of the participants' actual limbs and movements. This was achieved by displaying an infrared (IR) feed of the participants' upper limbs in an augmented virtuality environment (AVE) using a head-mounted display (HMD). Movements could be fed back in real time (46 ms system delay) or with an experimental delay of up to 570 ms. As hypothesized, participants' SoA decreased with increasing temporal visuomotor mismatches (p < .001), replicating previous findings and extending them to AVEs. In line with this literature, we report temporal limits of 222 ± 60 ms (50% psychometric threshold) in N = 28 participants. Our results demonstrate the validity of the experimental platform by replicating SoA studies both qualitatively and quantitatively. We discuss our findings in relation to the use of virtual and mixed reality in research and the implications for neurorehabilitation therapies.
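For context on the 50% psychometric threshold reported above, the following Python sketch fits a logistic psychometric function to binary agency judgments collected at several feedback delays and reads off the delay at which agency is rejected on half of the trials. The delay levels, response proportions, and use of SciPy's curve_fit are illustrative assumptions for exposition, not the study's actual data or analysis pipeline.

```python
# Minimal sketch (illustrative, not the study's data): estimate the delay at
# which participants reject agency over the seen movement on 50% of trials.
import numpy as np
from scipy.optimize import curve_fit

def logistic(delay_ms, threshold_ms, slope_ms):
    # Probability of a "that was not my movement" response at a given delay.
    return 1.0 / (1.0 + np.exp(-(delay_ms - threshold_ms) / slope_ms))

# Hypothetical per-condition data: added visuomotor delay (ms) and the
# proportion of trials on which agency was rejected.
delays = np.array([0.0, 90.0, 180.0, 270.0, 360.0, 450.0, 570.0])
p_reject = np.array([0.02, 0.10, 0.35, 0.60, 0.80, 0.92, 0.98])

# Fit the curve; the 50% threshold is the fitted midpoint parameter.
(threshold_ms, slope_ms), _ = curve_fit(logistic, delays, p_reject, p0=[250.0, 80.0])
print(f"50% SoA threshold ~ {threshold_ms:.0f} ms (slope {slope_ms:.0f} ms)")
```

A cumulative Gaussian or a hierarchical Bayesian fit would serve the same purpose; the essential point is that the reported threshold is a parameter of a fitted psychometric curve rather than a raw data point.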