Motion blur due to camera shake is one of the predominant sources of degradation in handheld photography. Single image blind deconvolution (BD) or motion deblurring aims at restoring a sharp latent image from the blurred recorded picture without knowing the camera motion that took place during the exposure. BD is a long-standing problem, but has attracted much attention recently, culminating in several algorithms able to restore photos degraded by real camera motion in high quality. In this paper, we present a benchmark dataset for motion deblurring that allows quantitative performance evaluation and comparison of recent approaches featuring non-uniform blur models. To this end, we record and analyse real camera motion, which is played back on a robot platform such that we can record a sequence of sharp images sampling the six-dimensional camera motion trajectory. The goal of deblurring is to recover one of these sharp images, and our dataset contains all the information needed to assess how closely various algorithms approximate that goal. In a comprehensive comparison, we evaluate state-of-the-art single image BD algorithms incorporating uniform and non-uniform blur models.
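To make the blur-formation model behind such a benchmark concrete, here is a minimal Python/NumPy sketch (an illustration under our own assumptions, not the authors' code): the blurred photograph is modeled as the temporal average of the sharp frames recorded along the camera-motion trajectory. The function and variable names (synthesize_blur, sharp_frames) are hypothetical, and the one-pixel-per-frame pan is a stand-in for the robot-recorded sequence.

import numpy as np

def synthesize_blur(sharp_frames: np.ndarray) -> np.ndarray:
    # Average an (N, H, W) stack of sharp frames into one (H, W) blurred image,
    # i.e. a discrete temporal integration over the exposure.
    return sharp_frames.mean(axis=0)

# Hypothetical example: 50 views of a 64x64 crop, shifted one pixel per frame
# to mimic a simple horizontal camera pan during the exposure.
rng = np.random.default_rng(0)
scene = rng.random((64, 128))
frames = np.stack([scene[:, i:i + 64] for i in range(50)])
blurred = synthesize_blur(frames)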
It is typically assumed that basic features of human gait are determined by purely biomechanical factors. In two experiments, we test whether gait transition speed and preferred walking speed are also influenced by visual information about the speed of self-motion. The visual flow during treadmill locomotion was manipulated to be slower than, matched to, or faster than the physical gait speed (visual gains of 0.5, 1.0, 2.0). Higher flow rates elicited significantly lower transition speeds for both the Walk-Run and Run-Walk transitions, as expected. Similarly, higher flow rates elicited significantly lower preferred walking speeds. These results suggest that visual information becomes calibrated to mechanical or energetic aspects of gait and contributes to the control of locomotor behavior.
Our bodies are the most intimately familiar objects we encounter in our perceptual environment. Virtual reality provides a unique method of allowing us to experience having a very different body from our own, thereby providing a valuable way to explore the plasticity of body representation. In this paper, we show that women can experience ownership over a whole virtual body that is considerably smaller or larger than their physical body. To gain a better understanding of the mechanisms underlying body ownership, we use an embodiment questionnaire and introduce two new behavioral response measures: an affordance estimation task (an indirect measure of body size) and a body size estimation task (a direct measure of body size). Interestingly, after viewing the virtual body from a first-person perspective, both the affordance and the body size estimation tasks indicate a change in the perception of the size of the participant's experienced body. The change is biased by the size of the virtual body (overweight or underweight). Another novel aspect of our study is that we distinguish between the physical, experienced, and virtual bodies by asking participants to provide affordance and body size estimations for each of the three bodies separately. This methodological point is important for virtual reality experiments investigating body ownership of a virtual body, because it offers a better understanding of which cues (e.g., visual, proprioceptive, memory, or a combination thereof) influence body perception, and whether the impact of these cues can vary between different setups.
Few HMD-based virtual environment systems display a rendering of the user's own body. Subjectively, this often leads to a sense of disembodiment in the virtual world. We explore the effect of being able to see one's own body in such systems on an objective measure of the accuracy of one form of space perception. Using an action-based response measure, we found that participants who explored near space while seeing a fully articulated and tracked visual representation of themselves subsequently made more accurate judgments of absolute egocentric distance to locations ranging from 4 m to 6 m away from where they were standing than did participants who saw no avatar. A non-animated avatar also improved distance judgments, but by a lesser amount. Participants who viewed either animated or static avatars positioned 3 m in front of their own position made subsequent distance judgments with similar accuracy to participants who viewed the equivalent animated or static avatar positioned at their own location. We discuss the implications of these results for theories of embodied perception in virtual environments.
A number of investigators have reported that distance judgments in virtual environments (VEs) are systematically smaller than distance judgments made in comparably sized real environments. Many variables that may contribute to this difference have been investigated, but none of them fully explains the distance compression. One approach to this problem, with implications for both VE applications and the study of perceptual mechanisms, is to examine the influence of the feedback available to the user. Most generally, we asked whether feedback within a virtual environment would lead to more accurate estimations of distance. Next, given the prediction that some change in behavior would be observed, we asked whether specific adaptation effects would generalize to other indications of distance. Finally, we asked whether these effects would transfer from the VE to the real world. All distance judgments in the head-mounted display (HMD) became nearly accurate after three different forms of feedback were given within the HMD. However, not all feedback sessions within the HMD altered real-world distance judgments. These results are discussed with respect to the perceptual and cognitive mechanisms that may be involved in the observed adaptation effects, as well as the benefits of feedback for VE applications.