Vision plays an important role in postural control, and visual perception of the gravity-defined vertical helps maintain upright stance. In addition, the influence of the gravitational field on object motion is known to provide a reference for motor and non-motor behavior. However, the role of dynamic, gravity-related visual cues in the control of postural balance has received little attention. To determine whether visual cues about gravitational acceleration are relevant for postural control, we assessed the relation between postural sway and visual motion that was either congruent or incongruent with gravitational acceleration. Postural sway of 44 healthy volunteers was recorded by means of force platforms while they watched virtual targets moving in different directions and with different accelerations. Small but significant differences in sway parameters emerged with respect to the characteristics of target motion: for vertically accelerated targets, gravitational motion (GM) was associated with smaller oscillations of the center of pressure than anti-GM. The present findings support the hypothesis that not only static but also dynamic visual cues about the direction and magnitude of the gravitational field are relevant for balance control during upright stance.
Psychophysical methods are widely used in neuroscience to investigate the quantitative relation between a physical property of the world and its perceptual representation provided by the senses. Recent studies introduced the Generalized Linear Mixed Model (GLMM) to fit the responses of multiple participants in psychophysical experiments. Another approach (the two-level approach) requires fitting a psychometric function to each individual participant's data using a Generalized Linear Model (GLM) and then testing hypotheses across participants in a second-level analysis. With either approach, implementing the statistical analysis in R is possible and beneficial. Here, we introduce the package MixedPsy for modeling and fitting psychometric data in R with either the two-level or the GLMM approach. The package, freely available from the CRAN repository, offers different methods for estimating the Point of Subjective Equivalence (PSE) and the Just Noticeable Difference (JND), and provides utilities for immediate visualization and plotting of the fitted results. This manuscript aims to provide researchers with a practical tutorial for implementing a complete analysis pipeline for psychophysical data using MixedPsy together with other packages and basic functionalities of the R programming environment.
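As an illustration of the first (per-participant) level of the two-level approach described above, the sketch below fits a psychometric function with base R's glm() and derives the PSE and JND from the fitted coefficients. It deliberately uses only base R rather than the MixedPsy API, and the simulated data and variable names (intensity, resp) are hypothetical.

    ## Minimal per-participant sketch of the two-level approach, base R only
    ## (not the MixedPsy API); data and variable names are illustrative.
    set.seed(1)
    intensity <- rep(seq(-3, 3, length.out = 9), each = 20)        # stimulus levels
    p_yes     <- pnorm(intensity, mean = 0.5, sd = 1.2)            # assumed true psychometric function
    resp      <- rbinom(length(intensity), size = 1, prob = p_yes) # simulated binary responses

    ## First level: binomial GLM with a probit link fits the psychometric function.
    fit  <- glm(resp ~ intensity, family = binomial(link = "probit"))
    beta <- coef(fit)

    ## PSE: stimulus level at which P("yes") = 0.5; JND: distance from the 50% to the 75% point.
    pse <- -beta[1] / beta[2]
    jnd <- qnorm(0.75) / beta[2]
    c(PSE = unname(pse), JND = unname(jnd))

In the two-level approach, the PSE and JND estimated in this way for each participant would then enter a second-level analysis across participants, whereas the GLMM approach fits all participants jointly with random effects.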
Over the last few years, online platforms for running psychology experiments beyond simple questionnaires and surveys have become increasingly popular. This trend accelerated after many laboratory facilities had to temporarily suspend in-person data collection under COVID-19-related lockdown regulations. Yet, while offering a valid alternative to in-person experiments in many cases, platforms for online experiments are still not a viable solution for a large part of human behavioral research. Two situations in particular pose challenges: first, when the research question requires design features or participant interaction that exceed the customization capabilities of the online platform; and second, when variation in hardware characteristics across participants introduces an inadmissible confounding factor. To mitigate these limitations, we developed ReActLab (Remote Action Laboratory), a framework for programming remote, browser-based experiments using freely available, open-source JavaScript libraries. Because the experiment runs entirely within the browser, the framework is portable to any operating system and many devices. In our case, we tested our approach by running experiments on a single model of Android tablet. Using ReActLab with this standardized hardware allowed us to tailor our experimental design to our research questions and to collect data outside of laboratory facilities without introducing setup variation among participants. In this paper, we describe our framework and present two experiments carried out with it: a visuomotor adaptation task and a visual localization task. Through comparison with results obtained from similar tasks in in-person laboratory settings, we discuss the advantages and limitations of developing browser-based experiments with our framework.