Sound can levitate objects of different sizes and materials through air, water and tissue. This allows us to manipulate cells, liquids, compounds or living things without touching or contaminating them. However, acoustic levitation has required the targets to be enclosed by acoustic elements or has offered only limited manoeuvrability. Here we optimize the phases used to drive an ultrasonic phased array and show that acoustic levitation can be employed to translate, rotate and manipulate particles using even a single-sided emitter. Furthermore, we introduce the holographic acoustic elements framework that permits the rapid generation of traps and provides a bridge between optical and acoustical trapping. Acoustic structures shaped as tweezers, twisters or bottles emerge as the optimum mechanisms for tractor beams or containerless transportation. Single-beam levitation could manipulate particles inside our body for applications in targeted drug delivery or acoustically controlled micro-machines that do not interfere with magnetic resonance imaging.
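As a rough illustration of the phase control this abstract describes (not the authors' optimisation procedure), the sketch below computes driving phases for a flat, single-sided 40 kHz array: a time-reversal focusing term plus an optional helical signature that yields a vortex ("twister") trap. The array size, element pitch, frequency and target point are illustrative assumptions.

```python
# Minimal sketch (not the authors' optimiser): driving phases for a flat,
# single-sided ultrasonic phased array that focus sound at a target point
# and optionally add a helical "twister" signature. Geometry, frequency and
# the target point are illustrative assumptions.
import numpy as np

SPEED_OF_SOUND = 343.0              # m/s in air
FREQ = 40e3                         # 40 kHz transducers, common in levitators
K = 2 * np.pi * FREQ / SPEED_OF_SOUND   # wavenumber

def array_positions(n=16, pitch=0.0105):
    """Transducer centres of an n x n flat array lying in the z = 0 plane."""
    coords = (np.arange(n) - (n - 1) / 2) * pitch
    xx, yy = np.meshgrid(coords, coords)
    return np.stack([xx.ravel(), yy.ravel(), np.zeros(n * n)], axis=1)

def trap_phases(positions, target, helicity=0):
    """Focusing phases plus an optional vortex (twister) signature.

    helicity = 0 gives a plain focus; helicity = 1 wraps one turn of helical
    phase around the beam axis, producing a ring-shaped vortex trap.
    """
    d = np.linalg.norm(positions - target, axis=1)   # path lengths to target
    focus = -K * d                                   # time-reversal focusing term
    angle = np.arctan2(positions[:, 1] - target[1],
                       positions[:, 0] - target[0])  # azimuth around beam axis
    return np.mod(focus + helicity * angle, 2 * np.pi)

if __name__ == "__main__":
    pos = array_positions()
    phases = trap_phases(pos, target=np.array([0.0, 0.0, 0.10]), helicity=1)
    print(phases[:5])
```

Setting helicity to zero recovers a plain focus; a non-zero value adds the kind of azimuthal phase signature on which vortex-type traps rely.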
Figure 1: Examples of data physicalizations: (left) population density map of Mexico City co-created by Richard Burdett and exhibited at the Tate Modern (photo by Stefan Geens), (center) similar data shown on an actuated display from the MIT Media Lab [70], and (right) spherical particles suspended by acoustic levitation [61]. All images are copyright to their respective owners.

ABSTRACT
Physical representations of data have existed for thousands of years. Yet it is now that advances in digital fabrication, actuated tangible interfaces, and shape-changing displays are spurring an emerging area of research that we call Data Physicalization. It aims to help people explore, understand, and communicate data using computer-supported physical data representations. We call these representations physicalizations, analogously to visualizations, their purely visual counterpart. In this article, we go beyond the focused research questions addressed so far by delineating the research area, synthesizing its open challenges, and laying out a research agenda.
Mental-Imagery based Brain-Computer Interfaces (MI-BCIs) allow their users to send commands to a computer using their brain activity alone (typically measured by ElectroEncephaloGraphy, EEG), which is processed while they perform specific mental tasks. While very promising, MI-BCIs remain barely used outside laboratories because of the difficulty users encounter in controlling them. Indeed, although some users obtain good control performances after training, a substantial proportion remains unable to reliably control an MI-BCI. This large variability in user performance led the community to look for predictors of MI-BCI control ability. However, these predictors were only explored for motor-imagery based BCIs, and mostly for a single training session per subject. In this study, 18 participants were instructed to learn to control an EEG-based MI-BCI by performing 3 MI tasks, 2 of which were non-motor tasks, across 6 training sessions on 6 different days. Relationships between the participants' BCI control performances and their personality, cognitive profile and neurophysiological markers were explored. While no relevant relationships with neurophysiological markers were found, strong correlations between MI-BCI performances and mental-rotation scores (reflecting spatial abilities) were revealed. A predictive model of MI-BCI performance based on psychometric questionnaire scores was also proposed. A leave-one-subject-out cross-validation process showed the stability and reliability of this model: it predicted participants' performance with a mean error of less than 3 points. This study determined how users' profiles impact their MI-BCI control ability and thus paves the way for designing novel MI-BCI training protocols adapted to the profile of each user.
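For concreteness, the sketch below mimics the leave-one-subject-out evaluation described above with a generic linear regression on synthetic questionnaire scores. The features, model and data are assumptions for illustration only; the study's actual predictors and coefficients are not reproduced here.

```python
# Illustrative sketch of the evaluation scheme described above: a regression
# model predicting BCI performance from questionnaire scores, assessed with
# leave-one-subject-out cross-validation. The data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
n_subjects = 18
# Hypothetical psychometric scores (e.g., mental rotation and two other scales)
X = rng.normal(size=(n_subjects, 3))
# Hypothetical BCI performance (% classification accuracy)
y = 55 + 8 * X[:, 0] - 4 * X[:, 1] + rng.normal(scale=2, size=n_subjects)

# Each subject is held out once and predicted from a model fit on the others
pred = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
print("LOSO mean absolute error: %.2f points" % np.mean(np.abs(pred - y)))
```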
Abstract
We present a method for creating three-dimensional haptic shapes in mid-air using focused ultrasound. This approach applies the principles of acoustic radiation force, whereby the non-linear effects of sound produce forces on the skin which are strong enough to generate tactile sensations. This mid-air haptic feedback eliminates the need for any attachment of actuators or contact with physical devices. The user perceives a discernible haptic shape when the corresponding acoustic interference pattern is generated above a precisely controlled two-dimensional phased array of ultrasound transducers. In this paper, we outline our algorithm for controlling the volumetric distribution of the acoustic radiation force field in the form of a three-dimensional shape. We demonstrate how we create this acoustic radiation force field and how we interact with it. We then describe our implementation of the system and provide evidence from both visual and technical evaluations of its ability to render different shapes. We conclude with a subjective user evaluation to examine users' performance for different shapes.
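As a hedged illustration of the basic building block behind such radiation-force fields (not the paper's full volumetric algorithm), the sketch below back-propagates spherical waves from a set of desired focal points above a two-dimensional array and takes the resulting phase per transducer. Geometry, frequency and focal positions are illustrative assumptions.

```python
# Minimal sketch, not the authors' algorithm: naive back-propagation phases
# that steer a flat 2-D ultrasound array towards a set of mid-air focal points.
import numpy as np

C, F = 343.0, 40e3                  # speed of sound (m/s), transducer frequency (Hz)
K = 2 * np.pi * F / C               # wavenumber

def focal_phases(transducers, focal_points):
    """Phase per transducer from the superposition of back-propagated foci."""
    field = np.zeros(len(transducers), dtype=complex)
    for p in focal_points:
        d = np.linalg.norm(transducers - p, axis=1)
        field += np.exp(-1j * K * d) / d    # spherical wave propagated backwards
    return np.angle(field)

if __name__ == "__main__":
    n, pitch = 16, 0.0105
    xs = (np.arange(n) - (n - 1) / 2) * pitch
    xx, yy = np.meshgrid(xs, xs)
    trans = np.stack([xx.ravel(), yy.ravel(), np.zeros(n * n)], axis=1)
    # Two focal points 20 cm above the array, e.g. two sample points of a shape
    foci = [np.array([-0.02, 0.0, 0.20]), np.array([0.02, 0.0, 0.20])]
    print(focal_phases(trans, foci)[:5])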
Sriram (2019). A volumetric display for visual, tactile and audio presentation using acoustic trapping. Nature, 575 (7782), pp. 320-323.