The 'Virtual Iraq' VR environment is designed as an immersive tool for exposure therapy treatment of combat-related PTSD. The application consists of a series of virtual scenarios designed to represent relevant contexts for VR exposure therapy, including city and desert road environments. In addition to the visual stimuli presented in the VR HMD, directional 3D audio, vibrotactile, and olfactory stimuli of relevance can be delivered. Stimulus presentation is controlled by the clinician via a separate 'Wizard of Oz' interface, with the clinician in full audio contact with the patient. The presentation at the conference will detail the results of our research and clinical treatment protocols as they stand at that time. Presently, an open clinical trial to evaluate the system's efficacy for PTSD treatment with military personnel is being conducted at the Naval Medical Center San Diego and at Ft. Lewis, Washington, and a randomized controlled trial comparing VR alone with VR plus D-cycloserine is in progress at Emory University. Ten additional test sites are expected to come online before the conference, addressing a variety of research questions involving assessment of PTSD, physiological markers of the disorder, the impact of multiple trauma events, and an fMRI study. Thus far, eight of 11 male and female treatment completers at two of the treatment sites have shown clinically significant improvement at posttreatment, and these patients no longer meet PTSD criteria. Given the challenges of treating this disorder, we are encouraged by these early results.

The Visual Computing of Projector-Camera Systems
Bimber, O.

Their increasing capabilities and declining cost have made video projectors widespread and established presentation tools. Being able to generate images larger than the actual display device virtually anywhere is an interesting feature for many applications that desktop screens cannot provide.
Several research groups are exploring this potential by applying projectors in unconventional ways to develop new and innovative information displays that go beyond simple screen presentations. Today's projectors are able to modulate the displayed images spatially and temporally. Synchronized camera feedback is analyzed to support real-time image correction that enables projections on complex everyday surfaces not bound to projector-optimized canvases or dedicated screen configurations. In this talk I will give an overview of our projector-camera-based image correction techniques for geometric warping, radiometric compensation, reduction of global illumination effects (such as inter-reflections) and view-dependent effects (such as specular reflections), increasing focal depth, and embedding imperceptible codes with a single or with multiple projection units. Throughout, GPU-based real-time rendering and computer vision on graphics hardware are tightly coupled. Such techniques have proved to be useful tools for many real-world applications. Examples include ad-hoc stereoscopic VR/...
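The radiometric compensation mentioned above can be illustrated with a simple per-pixel model. The sketch below is illustrative only: the function name and the simplified linear response model `C = V*P + F` (camera reading as surface/projector response times projector input, plus an ambient term) are assumptions for exposition, not the talk's actual pipeline, which handles far richer effects.

```python
import numpy as np

def compensate(desired, V, F):
    """Per-pixel radiometric compensation under the assumed linear model
    C = V*P + F: given the desired camera image, the per-pixel
    surface/projector response V, and the ambient term F, solve for the
    projector input P."""
    P = (desired - F) / np.maximum(V, 1e-6)  # avoid division by zero
    return np.clip(P, 0.0, 1.0)             # stay in the displayable range

# toy example: darker surface patches (V = 0.5) need a brighter input
desired = np.full((2, 2), 0.6)
V = np.array([[1.0, 0.5],
              [0.5, 1.0]])
F = np.full((2, 2), 0.1)
print(compensate(desired, V, F))
```

The clipping step is where real systems run into the limits of projector brightness: when the required `P` exceeds 1.0, the desired appearance cannot be reached on that pixel.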
Radiometric compensation techniques allow seamless projections onto complex everyday surfaces. Implemented with projector-camera systems, they support the presentation of visual content in situations where projection-optimized screens are not available or not desired, as in museums, historic sites, airplane cabins, or stage performances. We propose a novel approach that employs the full light transport between projectors and a camera to account for many illumination aspects, such as inter-reflections, refractions, shadows, and defocus. Precomputing the inverse light transport, in combination with an efficient GPU implementation, makes real-time compensation of captured local and global light modulations possible.
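The inverse-light-transport idea can be sketched on a toy "image" of a few pixels. Here the captured image is modeled as `c = T p + e`, with `T` the light transport matrix (diagonal entries for direct illumination, off-diagonal entries for global effects such as inter-reflections) and `e` an ambient term; the specific matrix values and the assumption that `T` is well-conditioned and invertible are mine, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4  # a tiny 4-pixel "image" for illustration
# light transport matrix T: strong direct (diagonal) component plus
# weak off-diagonal inter-reflection terms
T = 0.05 * rng.random((n, n)) + np.diag(0.8 + 0.1 * rng.random(n))
T_inv = np.linalg.inv(T)  # precomputed once, offline

desired = np.array([0.5, 0.6, 0.4, 0.7])
ambient = np.full(n, 0.05)
p = T_inv @ (desired - ambient)  # compensation image to project
captured = T @ p + ambient       # what the camera would then observe
print(np.allclose(captured, desired))  # True: global effects cancel out
```

At realistic resolutions `T` is enormous, which is why precomputation and a GPU implementation, as described in the abstract, are essential; a dense inverse as above is only feasible for toy sizes.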
Acquiring transparent, refractive objects is challenging, as these kinds of objects can only be observed by analyzing the distortion of reference background patterns. We present a new, single-image approach to reconstructing thin transparent surfaces, such as thin solids or surfaces of fluids. Our method is based on observing the distortion of light field background illumination. Light field probes have the potential to encode up to four dimensions in varying colors and intensities: spatial and angular variation on the probe surface. Commonly employed reference patterns, by contrast, are only two-dimensional, coding either position or angle on the probe. We show that the additional information can be used to reconstruct refractive surface normals and a sparse set of control points from a single photograph.
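The core geometric step behind normal recovery from refraction can be sketched with the vector form of Snell's law. This is a minimal sketch under assumptions of my own (a single thin refraction event and known refractive indices), not the paper's full reconstruction pipeline:

```python
import numpy as np

def refract(d, n, eta):
    """Refract unit direction d at a surface with unit normal n (eta = n1/n2)."""
    cos_i = -np.dot(d, n)
    sin2_t = eta**2 * (1.0 - cos_i**2)
    return eta * d + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * n

def normal_from_refraction(d_in, d_out, n1=1.0, n2=1.33):
    """Recover the surface normal from incident and refracted ray directions.
    Snell's law in vector form gives n2*d_out = n1*d_in + c*n for a scalar c,
    so the normal is parallel to n1*d_in - n2*d_out (the sign works out when
    n2 > n1, as for air into water)."""
    v = n1 * (d_in / np.linalg.norm(d_in)) - n2 * (d_out / np.linalg.norm(d_out))
    return v / np.linalg.norm(v)

# self-check: refract through a known normal, then recover it
n_true = np.array([0.0, 0.0, 1.0])
d_in = np.array([0.3, 0.0, -1.0]); d_in = d_in / np.linalg.norm(d_in)
d_out = refract(d_in, n_true, 1.0 / 1.33)  # air into water
print(np.allclose(normal_from_refraction(d_in, d_out), n_true))  # True
```

In the light-field-probe setting, `d_in` is known from the probe's angular coding and `d_out` from the camera ray, which is what makes per-pixel normal recovery from a single photograph possible.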