Phased Arrays of Transducers (PATs) allow accurate control of ultrasound fields, with applications in haptics, levitation (i.e. displays) and parametric audio. However, algorithms for multi-point levitation or tactile feedback are usually limited to computing solutions on the order of hundreds of sound-fields per second, preventing the use of multiple high-speed points, a feature that can broaden the scope of applications of PATs. We present GS-PAT, a GPU multi-point phase retrieval algorithm capable of computing 17K solutions per second for up to 32 simultaneous points on a mid-range consumer-grade GPU (NVIDIA GTX 1660). We describe the algorithm and compare it to state-of-the-art multi-point algorithms used for ultrasound haptics and levitation, showing similar quality of the generated sound-fields and much higher computation rates. We then illustrate how the shift in paradigm enabled by GS-PAT (i.e. real-time control of several high-speed points) opens new applications for PAT technologies, such as volumetric fully coloured displays, multi-point spatio-temporal tactile feedback, parametric audio, and simultaneous combinations of these modalities.
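To make the multi-point phase retrieval idea concrete, the sketch below shows a plain Gerchberg-Saxton-style iteration for a phased array in NumPy: it alternates between enforcing target amplitudes at the control points and phase-only (unit-amplitude) emission at the transducers. This is a minimal illustration of the general technique the abstract refers to, not the authors' GS-PAT GPU implementation; the function name, the propagation matrix `A`, and the iteration count are assumptions for illustration.

```python
import numpy as np

def gs_phase_retrieval(A, target_amps, iters=100):
    """Gerchberg-Saxton-style multi-point phase retrieval (illustrative sketch).

    A           : (M, N) complex propagation matrix from N transducers to
                  M control points (e.g., from a piston-source model).
    target_amps : (M,) desired amplitudes at the control points.
    Returns     : (N,) transducer phases in radians.
    """
    M, N = A.shape
    # Start from unit-amplitude, zero-phase transducer activations.
    x = np.ones(N, dtype=complex)
    for _ in range(iters):
        # Forward propagate transducer activations to the control points.
        p = A @ x
        # Keep the phase at each point, impose the target amplitude.
        p = target_amps * p / np.maximum(np.abs(p), 1e-12)
        # Back-propagate the corrected point field to the transducers.
        x = A.conj().T @ p
        # Constrain transducers to unit amplitude (phase-only control).
        x = x / np.maximum(np.abs(x), 1e-12)
    return np.angle(x)
```

A GPU implementation like the one described in the abstract can batch these matrix products and solve many such problems per second; the sketch only conveys the alternating-projection structure.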
3D selection in dense VR environments (e.g., point clouds) is extremely challenging due to occlusion and imprecise mid-air input modalities (e.g., 3D controllers and hand gestures). In this paper, we propose "Slicing-Volume", a hybrid selection technique that combines 3D interaction in mid-air with a 2D pen-and-tablet metaphor in VR. Inspired by well-known slicing-plane techniques in data visualization, our technique consists of a 3D volume that encloses target objects in mid-air, which are then projected to a 2D tablet view for precise selection on a tangible physical surface. While slicing techniques and tablets-in-VR have been previously explored, we evaluated the potential of this hybrid approach to improve accuracy in highly occluded selection tasks, comparing different multimodal interactions (e.g., Mid-air, Virtual Tablet and Real Tablet). Our results showed that our hybrid technique significantly improved the overall accuracy of selection compared to Mid-air selection alone, thanks to the haptic feedback added by the physical tablet surface rather than the added visualization given by the tablet view.
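The core geometric step implied by the abstract, filtering points inside a slicing volume and flattening them onto a 2D tablet view, can be sketched as follows. This is an inferred, simplified illustration (axis-aligned box, orthographic projection); the function and parameter names are hypothetical and not taken from the paper.

```python
import numpy as np

def slice_and_project(points, box_min, box_max, axis=2):
    """Keep 3D points inside an axis-aligned slicing volume and drop the
    chosen axis to obtain normalized 2D tablet-view coordinates.

    points  : (P, 3) array of scene points (e.g., a point cloud).
    box_min : (3,) lower corner of the slicing volume.
    box_max : (3,) upper corner of the slicing volume.
    axis    : axis orthogonal to the tablet view (2 = project along z).
    """
    # Boolean mask of points enclosed by the slicing volume.
    inside = np.all((points >= box_min) & (points <= box_max), axis=1)
    kept = points[inside]
    # Orthographic projection: discard the coordinate along the view axis.
    uv = np.delete(kept, axis, axis=1)
    # Normalize to [0, 1] tablet coordinates for display and pen selection.
    lo = np.delete(box_min, axis)
    hi = np.delete(box_max, axis)
    uv = (uv - lo) / np.maximum(hi - lo, 1e-12)
    return uv, inside
```

Selections made on the tablet can then be mapped back to the original points via the returned `inside` mask.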
Phased arrays of transducers have been quickly evolving in terms of software and hardware, with applications in haptics (acoustic vibrations), display (levitation) and audio. Most recently, Multimodal Particle-based Displays (MPDs) have even demonstrated volumetric content that can be seen, heard, and felt simultaneously, without additional instrumentation. However, current software tools only support individual modalities and do not address the integration and exploitation of the multimodal potential of MPDs. This is because there is no standardized presentation pipeline tackling the challenges related to presenting this kind of multimodal content (e.g., multimodal support, multi-rate synchronization at 10 kHz, visual rendering, or synchronization and continuity). This paper presents OpenMPD, a low-level presentation engine that deals with these challenges and allows structured exploitation of any type of MPD content (i.e., visual, tactile, audio). We characterize OpenMPD's performance and illustrate how it can be integrated into higher-level development tools (i.e., the Unity game engine). We then illustrate its ability to enable novel presentation capabilities, such as support of multiple MPD contents, dexterous manipulations of fast-moving particles, or novel swept-volume MPD content.
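One of the challenges named above, multi-rate synchronization at 10 kHz, amounts to resampling content authored at low rates (e.g., visual frame rates) into a continuous stream of per-update particle states. The sketch below is a generic illustration of that resampling step using linear interpolation; it is not OpenMPD's actual API, and all names and rates other than the 10 kHz device rate are assumptions.

```python
import numpy as np

def resample_path(keyframes, key_rate_hz, device_rate_hz=10_000):
    """Upsample low-rate content keyframes to the device update rate.

    keyframes   : (K, 3) particle positions authored at key_rate_hz
                  (e.g., 60 Hz from a game-engine update loop).
    Returns     : (S, 3) positions, one per 10 kHz device update,
                  linearly interpolated for smooth, continuous motion.
    """
    K = len(keyframes)
    duration = (K - 1) / key_rate_hz
    t_key = np.linspace(0.0, duration, K)
    t_dev = np.arange(0.0, duration, 1.0 / device_rate_hz)
    # Interpolate each coordinate independently onto the device timeline.
    return np.stack(
        [np.interp(t_dev, t_key, keyframes[:, d]) for d in range(3)],
        axis=1,
    )
```

In practice a presentation engine would also bound per-update displacement to respect particle dynamics; the sketch only shows the rate-conversion idea.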