The ability to perceive the direction of whole-body motion during standing may be critical to maintaining balance and preventing a fall. Our first goal was to quantify kinesthetic perception of whole-body motion by estimating directional acuity thresholds of support-surface perturbations during standing. The directional acuity threshold to lateral deviations in backward support-surface motion in healthy, young adults was quantified as 9.5 ± 2.4° using the psychometric method (n = 25 subjects). However, inherent limitations in the psychometric method, such as the large number of required trials and the predetermined stimulus set, may preclude wider use of this method in clinical populations. Our second goal was to validate an adaptive algorithm known as parameter estimation by sequential testing (PEST) as an alternative threshold estimation technique that minimizes the required trial count without predetermined knowledge of the relevant stimulus space. The directional acuity threshold was estimated at 11.7 ± 3.8° with the PEST method (n = 11 of 25 subjects, psychometric threshold = 10.1 ± 3.1°) using only one-third the number of trials required by the psychometric method. Furthermore, PEST estimates of the directional acuity threshold were highly correlated with the psychometric estimates across subjects (r = 0.93), suggesting that both methods provide comparable estimates of the perceptual threshold. Computational modeling of both techniques revealed similar variance of about 1° in the estimated thresholds across simulations. Our results suggest that the PEST algorithm can be used to more quickly quantify whole-body directional acuity during standing in individuals with balance impairments.
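To illustrate how such an adaptive procedure works, the sketch below shows a simplified PEST-style staircase in Python for a directional-detection task of this kind. It assumes a simulated observer whose detection probability follows a logistic psychometric function; the function names, parameter values (starting angle, step sizes, target proportion, stopping rule), and the exact stepping logic are illustrative assumptions, not the protocol used in the study.

```python
# Simplified PEST-style adaptive staircase for estimating a directional
# acuity threshold (e.g., the lateral deviation angle, in degrees, of a
# backward support-surface perturbation that a subject can just detect).
# Illustrative sketch only; parameters and observer model are assumptions.

import math
import random


def simulated_observer(angle_deg, true_threshold=10.0, slope=1.5):
    """Return True ("deviation detected") with probability given by a
    logistic psychometric function centered at the true threshold."""
    p = 1.0 / (1.0 + math.exp(-(angle_deg - true_threshold) / slope))
    return random.random() < p


def run_pest(respond,
             start_angle=20.0,   # initial lateral deviation (deg)
             start_step=8.0,     # initial step size (deg)
             min_step=1.0,       # stop when the step falls below this
             target_p=0.75,      # targeted proportion of "detected" responses
             wald_w=1.0,         # deviation bound before changing the stimulus
             max_trials=60):
    """Adjust the stimulus angle with PEST-like rules:
    - run trials at the current angle until the count of "detected"
      responses deviates from the expected count by more than wald_w;
    - step down (harder) if detection is too frequent, up if too rare;
    - halve the step at each reversal, double it after repeated steps in
      the same direction (a simplified version of the PEST doubling rule)."""
    angle, step = start_angle, start_step
    direction, same_dir_steps = 0, 0
    n_trials = 0

    while step >= min_step and n_trials < max_trials:
        hits, trials_here = 0, 0
        while n_trials < max_trials:
            hits += respond(angle)
            trials_here += 1
            n_trials += 1
            expected = target_p * trials_here
            if hits >= expected + wald_w:    # detecting too well -> harder
                new_dir = -1
                break
            if hits <= expected - wald_w:    # detecting too poorly -> easier
                new_dir = +1
                break
        else:
            break                            # ran out of trials at this level

        if direction != 0 and new_dir != direction:
            step /= 2.0                      # reversal: halve the step
            same_dir_steps = 0
        else:
            same_dir_steps += 1
            if same_dir_steps >= 3:
                step *= 2.0                  # repeated same-direction steps
        direction = new_dir
        angle = max(0.0, angle + direction * step)

    return angle, n_trials                   # final angle ~ threshold estimate


if __name__ == "__main__":
    est, n = run_pest(simulated_observer)
    print(f"Estimated directional acuity threshold: {est:.1f} deg after {n} trials")
```

In an actual experiment, the simulated observer would be replaced by the subject's trial-by-trial responses, and the stimulus level at which the step size drops below the minimum would serve as the threshold estimate.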
Over the past decade, advances in optics, displays, graphics, tracking, environment mapping, and audio have revolutionized technologies for extended reality (XR). The scope of XR has exploded in recent years, with applications spanning education, marketing, and remote work, as well as training for medicine, industry, and the military [1]. All-day wearable XR displays are likely to reinvent computer interfaces in ways that rival the smartphone and personal computer, dramatically changing the way we interact with both the digital and physical worlds, as well as with other people. As we move toward a future in which seeing and hearing virtual objects is commonplace, we must also consider another important sensory aspect: touch.

The sensation of touch is critical to our ability to interact with objects in the virtual world, just as it is in the physical world, yet there remain significant challenges in synthesizing believable haptic interactions. The earliest haptic device designers proposed that, for interactions to feel realistic, a haptic device must make free space feel free and must render stiff virtual objects [2]. These objectives led to the development of probe-based devices that exhibited low inertia and little to no backlash in their transmission mechanisms, and that required anchoring to desktop surfaces so that world-grounded stiffnesses and resistances could be rendered to the user. XR haptic device designers face yet more challenges: XR devices must not only meet the free-space and stiffness criteria but must do so in an ungrounded, low-encumbrance manner. So far, most efforts have focused on wireless haptic controllers [3][4][5], fingertip displays [6], and haptic gloves [7]. While these devices address the free-space consideration, they often cannot render virtual stiffnesses and, critically, prevent or degrade concurrent interaction with physical objects in all-day XR contexts. Soft, skin-like materials and devices may show promise here, imposing negligible physical burden on users while delivering reliable sensations and sufficient forces to the skin [8][9][10]; however, much of this technology is still under development and further from implementation.

How can we render virtual stiffnesses without being grounded to the world or encumbering the hands? One method that has