Tasks that require less physical effort are generally preferred over more physically demanding alternatives. Similarly, tasks that require less mental effort are generally preferred over more mentally demanding alternatives. But what happens when one must choose between tasks that entail different kinds of effort, one mainly physical (e.g., carrying buckets) and the other mainly mental (e.g., counting)? We asked participants to choose between a bucket-carrying task and a counting task. Participants were less likely to choose the bucket task when it required a long reach rather than a short reach, and they were also less likely to choose the bucket task the smaller the final count value. We tested the hypothesis that subjective task durations provide a common currency for comparing the difficulties of the two kinds of tasks. This hypothesis accounted for the task-choice data better than an account based on objective task durations. Our study opens the door to a new problem in the study of attention, perception, and psychophysics: judging the difficulty of different kinds of tasks. The approach we took, which relies on two-alternative forced choice together with modeling of the basis for the choice, may prove useful in future investigations.
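A common way to model two-alternative forced choice on a single "common currency" dimension is a logistic choice rule over the difference in subjective values. The following is a minimal sketch of that general idea, not the authors' exact model; the function name, parameters, and the sensitivity parameter beta are illustrative assumptions:

```python
from math import exp

def p_choose_bucket(d_bucket, d_count, beta=1.0):
    """Generic logistic choice rule (an illustrative sketch, not the study's
    fitted model): the longer the subjective bucket-task duration relative
    to the subjective counting-task duration, the lower the probability of
    choosing the bucket task. beta is a hypothetical sensitivity parameter."""
    return 1.0 / (1.0 + exp(-beta * (d_count - d_bucket)))
```

Under this rule, equal subjective durations yield indifference (p = 0.5), and the choice probability shifts smoothly toward the task judged subjectively shorter.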
Although there are many virtual reality (VR) applications in sports, only a handful of studies have visualized the whole body. It remains unclear how much of one's own body must be visualized in head-mounted display (HMD)-based VR to ensure fidelity and performance comparable to the real world. In the current study, 20 young, healthy participants completed three tasks in a real and a virtual environment: a balance task, a grasping task, and a throwing task with a ball. The aim was to determine how visualizing different body parts affects the quality of movement execution and to derive guidelines for future virtual body presentation. In addition, human performance was compared between reality and VR with whole-body visualization. Regarding the main goal of the study, the measured parameters differed depending on which body parts were visualized. In the balance task, differences among the VR body visualizations arose mainly from the no-body visualization (NB) compared to the other visualization types: whole body (WB), WB except feet (NF), and WB except feet and legs (NLF). In the grasping task, the different body visualizations seemed to have no impact on participants' performance. In the throwing task, whole-body visualization led to higher accuracy than the other visualization types. Regarding the comparison between conditions, we found significant differences between reality and VR, with large effects on time to completion in the balance and grasping tasks, the number of foot strikes on the beam in the balance task, and the subjective difficulty ratings for all tasks. However, the number of errors and the quality of performance did not differ significantly.
The current study is the first to compare sports-related tasks in VR and reality while also manipulating the virtual body (occluding body parts). For studies analyzing perception and sports performance, and for VR sports interventions, we recommend real-time visualization of the whole body.
Virtual reality (VR) is popular across many fields and is increasingly used in sports as a training tool, driven by recently improved display technology, greater computational power, and lower head-mounted display costs. As in the real world (R), visual input is the most important stimulus provided by VR. However, it has not been demonstrated whether gaze behavior in VR reaches the same level as in R; this information is important for the development of VR applications and software. We therefore designed several tasks to analyze gaze accuracy and gaze precision using eye-tracking devices in R and VR. Twenty-one participants completed three eye-movement tasks in sequence: gazing at static targets, tracking a moving target, and gazing at targets at different distances. For the analysis, an averaged root-mean-square distance was calculated between the coordinates of each target and the recorded gaze points for each task. For gaze accuracy, the results showed no significant differences between R and VR for static targets at 1 m distance (p > 0.05), small significant differences for targets placed at different distances (p < 0.05), and large differences when tracking the moving target (p < 0.05). Precision in VR was significantly worse than in R in all tasks with static gaze targets (p < 0.05). Overall, this study gives a first insight into comparing foveal vision, in particular gaze accuracy and precision, between R and VR, and can therefore serve as a reference for developing future VR applications.
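The error metric described above, a root-mean-square of the distances between a target's coordinates and the recorded gaze points, can be sketched as follows. This is a generic illustration of the stated computation; the function name and data layout are assumptions, not taken from the study:

```python
from math import hypot, sqrt

def gaze_rms_error(target_xy, gaze_xy):
    """Root-mean-square of Euclidean distances between one target and a
    set of recorded gaze samples (illustrative sketch of the metric).

    target_xy: (x, y) coordinates of the target
    gaze_xy:   iterable of (x, y) recorded gaze points
    """
    tx, ty = target_xy
    dists = [hypot(x - tx, y - ty) for x, y in gaze_xy]  # per-sample distances
    return sqrt(sum(d * d for d in dists) / len(dists))  # RMS over samples
```

Averaging this value over all targets in a task would then yield a single accuracy score per task and condition, as the abstract describes.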
Despite its increased use in sports, it is still unclear to what extent VR training tools can support motor learning of complex movements. Previous VR studies have primarily addressed performance rather than motor skill learning. The current study therefore compared VR training with video training for acquiring a karate technique, the Soto Uke moving forward in Zenkutsu Dachi, without a trainer or partner present. Further analyses examined whether a reduced visualization showing only the forearms, compared to whole-body visualization in VR, suffices to acquire the basics of the movement. Four groups were tested: two groups conducted VR training (VR-WB with whole-body visualization, and VR-FA with only the forearms visualized), a third group underwent video-based learning (VB), and a control group (C) received no intervention. In consultation with karate experts, a scoring system was developed to rate movement quality, divided into upper-body performance, lower-body performance, and fist pose. A three-way ANOVA with repeated measures, including the between-subject factor group [VR-WB, VR-FA, VB, C] and the within-subject factors time [pre, post, retention] and body region [upper body, lower body, fist pose], showed that all groups except C improved significantly, with a similar course across all body regions after four training sessions. Accordingly, VR training appears to be as effective as video training, and the transfer of VR-acquired skills to the natural environment was equally sufficient despite the different body visualization types. Suggestions are made regarding features of future VR training simulations.
Virtual reality (VR) is a promising tool that is increasingly used in many fields, such as sports science and medicine, in which virtual walking can be generalized through detailed modeling of the physical environment. However, visualizing a virtual environment through a head-mounted display (HMD) differs from reality, and it is still unclear whether visual perception works equally well in VR. The purpose of the current study was to compare spatial orientation between the real world (RW) and VR. Participants walked blindfolded to variously placed objects in a real and a virtual environment that did not differ in physical properties. They were equipped with passive markers to track the position of the back of the hand, which was used to specify each object's location. In the first task, after observing the objects for 15 s, participants walked blindfolded from one starting position to sport-specific objects requiring different degrees of rotation (0°, 45°, 180°, and 225°). A three-way ANOVA with repeated measures indicated no significant difference between RW and VR across the degrees of rotation (p > 0.05). In addition, participants walked blindfolded three times from a new starting position to two objects, which were ordered differently across conditions. Except for one case, no significant differences in the pathways between RW and VR were found (p > 0.05). These findings indicate that VR elicits behavior similar to real-world interaction, supporting its use.